
Three key takeaways from Microsoft Ignite 2024

In late November, Microsoft Ignite 2024 lit up Chicago, Illinois, for five days. Amidst snowy weather and a packed agenda, thousands gathered to explore the future of technology, including groundbreaking developments in AI, multi-cloud, and security.

Luke Matthews, Brennan’s Head of Cloud and attending on behalf of Team Brennan, managed to pack a staggering 23 sessions into just four days, furiously burning through notepads along the way.

Here are his three key takeaways, as viewed through the lens of customers, partners, and technologists.

1. AI & Copilot: From Assistants to Autonomous Agents

The buzzword of the conference? Copilot. Microsoft’s vision for Copilot extends far beyond task automation, presenting three levels of AI engagement:

1. Request/Retrieve
Think of this as a smarter search, gathering information for specific tasks.

2. Task On-Demand
AI handles specific tasks, such as drafting reports or scheduling follow-ups.

3. Autonomous Agents
These AI tools can plan, learn, and escalate tasks as needed, mimicking an executive assistant or even a full team driving a business process.

A standout feature was Copilot Actions, which combines the intelligence of Outlook rules with Power Platform functionality. This tool can proactively gather information, schedule tasks, and summarise results—perfect for report generation or streamlining executive workflows.

For businesses, the key to unlocking AI’s potential lies in mapping these roles to tangible benefits. For instance, Microsoft shared that a Copilot agent was able to do the work of 13 full-time employees handling internal information requests for their own People & Culture department.

For technologists, the potential runs even deeper. A Copilot lab session demonstrated how mid-level developers could use Copilot in GitHub to modernise apps, including adding observability features. Autonomous agents may soon augment or replace certain Power Platform workflows, making solutions more efficient and scalable.

2. Multi-Cloud Management: Simplifying the Complex

With businesses navigating multi-cloud environments, Microsoft is positioning Azure as the ultimate solution for simplifying cloud complexity. Azure Arc stood out as a pivotal tool, offering streamlined inventory management, automated Server OS patching, and governance across hybrid and multi-cloud environments—all accessible through a single Azure portal.

Azure Arc enables OS patching with just one reboot per quarter, a lifesaver for IT teams managing enterprise environments. It also helps reduce “cloud sprawl,” addressing the challenge of tracking workloads and assets across platforms.

For partners, Azure Migrate adds another layer of value by creating business cases for cloud migrations and executing them. This now includes re-platforming workloads to Azure PaaS, or adopting a hybrid solution that remains on-premises with Azure Local.

Despite these advancements, resiliency remains a pressing issue. Only 13% of businesses currently consider their cloud implementations resilient, yet Microsoft claims that 74% of outages could be prevented with proper resilient deployment practices, a figure that underscores the importance of building with tolerable downtime in mind.
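As a hedged aside, simple availability arithmetic shows why deployment practices matter so much: when a workload hard-depends on several services in series, their availabilities multiply, so each extra dependency erodes uptime. The figures below are illustrative only, not Microsoft's methodology.

```python
# Illustrative sketch (invented figures): availability of a chain of hard
# dependencies is the product of the individual availabilities.

def composite_availability(availabilities):
    """Availability of a workload where every listed service must be up."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

# Three services, each at "three nines" (99.9%):
chain = composite_availability([0.999, 0.999, 0.999])
print(f"Composite availability: {chain:.4%}")  # ~99.70%

# Annual downtime budget implied by that figure:
minutes_per_year = 365 * 24 * 60
print(f"Expected downtime: {(1 - chain) * minutes_per_year:.0f} min/year")
```

Three individually solid services still give a noticeably weaker composite, which is why tolerable downtime has to be designed for at the architecture level rather than assumed per component.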

FinOps is also driving cloud maturity. From a partner perspective, multi-cloud management opens massive opportunities. Microsoft is encouraging partners to think bigger, leveraging tools like Azure Arc and Azure Migrate not just to manage infrastructure but to re-architect solutions that deliver long-term ROI.

3. Security by Design: The Microsoft Mandate

Security was a central theme throughout Ignite, with Microsoft emphasising its shift to “security by design, by default, and in operations.” This approach prioritises proactive measures over convenience, sacrificing some interoperability to achieve a stronger security posture.

Microsoft’s internal Mean Time to Remediation (MTTR) for security incidents is under 28 minutes, setting a high benchmark for the industry. New updates in Microsoft Defender for Cloud include Copilot capabilities that can automatically detect and fix vulnerabilities, reducing the burden on DevSecOps teams.
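For teams that want to benchmark against that figure, MTTR is just the mean of (remediation time minus detection time) across closed incidents. A minimal sketch, with invented timestamps:

```python
from datetime import datetime, timedelta

# Illustrative incident log (detected, remediated) -- timestamps invented.
incidents = [
    (datetime(2024, 11, 19, 9, 0),  datetime(2024, 11, 19, 9, 21)),   # 21 min
    (datetime(2024, 11, 19, 13, 5), datetime(2024, 11, 19, 13, 37)),  # 32 min
    (datetime(2024, 11, 20, 2, 40), datetime(2024, 11, 20, 3, 8)),    # 28 min
]

def mttr(pairs):
    """Mean Time to Remediation across a list of closed incidents."""
    total = sum((fixed - detected for detected, fixed in pairs), timedelta())
    return total / len(pairs)

print(f"MTTR: {mttr(incidents).total_seconds() / 60:.0f} minutes")  # MTTR: 27 minutes
```

Tracking this number per quarter, rather than per incident, is what makes it a usable benchmark.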

One particularly intriguing development is Purview’s expanded functionality, which now includes tracking risky usage of tools like ChatGPT and other large language models (LLMs). For businesses embracing AI, this adds an essential layer of governance and risk management.

Partners were also encouraged to adopt Microsoft’s mindset: build resilient systems, assume breaches, and focus on reducing MTTR. The analogy shared during the event was compelling: “We build irrigation systems; we don’t refill buckets of water.” This shift from reactive to proactive security is critical for maintaining trust and uptime in today’s interconnected environments.

Microsoft Ignite 2024 offered an exciting glimpse into the future of technology, with innovations that promise to reshape how businesses operate, partners deliver solutions, and technologists approach their craft.

Whether it’s deploying Copilot agents, simplifying multi-cloud management, or building resilient systems, one thing is clear: the opportunities are endless—but only for those ready to act. Thanks to Luke for gathering and sharing his insights.

Continue the conversation on our LinkedIn page and see how these developments could shape your business strategy.

No longer a fringe curio, AI has evolved from futuristic concept into an all-consuming technology. And while the first wave of viable AI learned to walk with chatbots and basic automations, AI is coming of age as organisations seek to fuse it with real-time data to unlock efficiencies and opportunities. This is the era of dynamic data, and companies are beginning to realise just how critical data maturity is to make it reality.

If the first wave of viable AI was all about chatbots—linear automation that answered simple queries framed as preset questions based on predefined data—businesses are now looking to flex beyond set outcomes to integrate AI with real-time operational data. Think retail companies merging AI with live sales data, or member organisations feeding AI with member information to personalise experiences and deliver timely insights.

In essence, businesses are realising data is at its most potent when it’s current. While point-in-time data provides useful historical context, real-time data is the fuel AI needs to make accurate predictions and deliver timely insights. Instead of relying on last quarter’s sales reports, services like Azure Synapse Analytics enable the integration of real-time operational data with AI models, accelerating time to insight and giving businesses the ability to tweak operations on the fly. It’s this shift that is redefining how businesses interact with their data.
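The gap between point-in-time and real-time data can be shown with a toy example (invented numbers, and plain Python rather than Azure Synapse Analytics itself): a static quarterly average goes stale, while an exponentially weighted average updated with each live sale tracks a demand shift as it happens.

```python
# Toy illustration of static vs live views of the same metric.
quarterly_avg = 100.0                        # last quarter's report: ~100 units/day
live_sales = [102, 98, 110, 125, 140, 155]   # live feed: demand is ramping up

ewma = quarterly_avg
alpha = 0.5                                  # weight given to the newest observation
for s in live_sales:
    ewma = alpha * s + (1 - alpha) * ewma    # update with each live data point

print(f"Static view : {quarterly_avg:.0f} units/day")   # 100 units/day
print(f"Live view   : {ewma:.0f} units/day")            # 141 units/day
```

An AI model fed the static view would still be planning for ~100 units a day; one fed the live view sees the surge while there is still time to act on it.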

Where are you? Modernising or innovating.

To fully realise the benefits of AI, businesses need to move beyond basic data storage and management. They have to strive for data maturity.

Which means that, broadly speaking, businesses are either modernising or innovating. For the modernisers, shedding siloed systems and centralising data onto modern, scalable cloud platforms – like Azure SQL Database or Azure Data Lake – is the most potent first step.

Wherever you sit, the success of both approaches hinges on one critical element: governance. Data without governance is chaos, and chaos doesn’t scale well.

Data governance. The key to scalable AI.

Governance—an often-overlooked aspect of data management—is critical for businesses looking to scale AI. At its core, data governance ensures data is properly managed, secured, and accessible to the right people. In the AI era, governance also plays a crucial role in helping AI understand and interpret data correctly.

Organisations successfully governing their data find it becomes much easier to scale their AI ambitions. By treating each dataset as “endorsed”—complete with tagging covering such things as data owner, subject matter expert, along with clearly defined security roles—businesses can create a consistent structure across their entire data ecosystem. It’s a function Purview was built for. This structure not only improves data management but also ensures AI can operate efficiently and effectively.

Practical steps for implementing data governance.

Implementing data governance needn’t be overwhelming. In our experience, applying a few key principles to data subsets before scaling up works better than an all-in, fix-all approach. Here’s a simple roadmap for getting started:

1. Identify key data owners and stakeholders: Start by assigning clear dataset ownership. These individuals will be responsible for data accuracy, quality, and security. Ensure you have subject matter experts who can oversee specific areas, like finance or customer data, to help maintain the quality of these datasets.

2. Define security and access controls: It’s critical to establish who can access your data and what they can do with it. Creating role-based access controls using a solution like Entra ID ensures that only authorised individuals can view, modify, or export sensitive information. This step is especially important when integrating AI, as AI needs to adhere to the same security and access controls.

3. Establish metadata and tagging practices: Metadata provides critical context to your data, helping both humans and AI interpret it accurately. By tagging data with relevant information—timeframes, sources, subject areas—you can make it easier to manage, search, and use for AI-driven insights. Metadata also helps AI understand how to interpret or make recommendations based on the context of the data.

4. Adopt a consistent governance framework: Once you’ve established governance for a subset, apply the same framework to the rest. This ensures consistency and scalability, making it easier to introduce new datasets or expand AI applications.

5. Monitor and adjust: Data governance is not a “set-and-forget” strategy. Regularly review your processes and adjust as your business evolves. AI applications may demand different types of data or more frequent access to real-time information, so your governance model should adapt accordingly.
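The roadmap above can be sketched in miniature. All names here are illustrative, not a Purview or Entra ID API: each dataset carries ownership, access roles, and tags, and the same "endorsed" bar is applied consistently before AI is allowed near it.

```python
from dataclasses import dataclass, field

# Minimal governance sketch (hypothetical names; a real implementation
# would sit on a catalogue such as Purview and an identity provider
# such as Entra ID).

@dataclass
class Dataset:
    name: str
    owner: str                                          # step 1: clear ownership
    subject_matter_expert: str
    allowed_roles: set = field(default_factory=set)     # step 2: access control
    tags: dict = field(default_factory=dict)            # step 3: metadata

    def endorsed(self) -> bool:
        """Step 4: the same consistency bar applied to every dataset."""
        return bool(self.owner and self.subject_matter_expert and self.tags)

    def can_access(self, role: str) -> bool:
        return role in self.allowed_roles

sales = Dataset(
    name="sales_live",
    owner="jane.doe",
    subject_matter_expert="finance-team",
    allowed_roles={"analyst", "ai-service"},
    tags={"source": "POS", "timeframe": "real-time", "subject": "sales"},
)

print(sales.endorsed())                  # True  -- safe to expose to AI
print(sales.can_access("contractor"))    # False -- not an authorised role
```

Step 5, monitoring and adjusting, is then a matter of re-running the same checks as datasets and roles evolve.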

The benefits of prioritising data.

When data is prioritised and governed correctly, organisations experience a shift. They move from reactive problem-solving to proactive planning. Data starts to drive business strategy, leading to more informed decisions and greater alignment between business and technology. Teams know what’s happening now, can predict what’s coming, and can feel confident in their data’s accuracy.

Moreover, when data strategies are aligned with business goals, everyone wins. It’s a calmer, more coordinated way of working, where both people and AI have a clear understanding of the data’s structure and purpose. When data strategies are disconnected from business strategies, even the most cutting-edge data platform will stagnate. It’s like a ship without a rudder—data solutions veer off course, while business leaders tack in a different direction. When data is prioritised and aligned with where the business is heading, organisations are more likely to stay focused and calm, even during times of disruption or rapid technological change.

What C-Level executives need to know about AI and data.

For CDOs, CTOs, or other executives responsible for data, understanding the strategic, tactical, and practical approaches to data management is critical.

Strategically, executives should ensure their organisation is on a modern data platform, engage in proactive conversations about the implications of AI (how might AI disrupt your industry, and how can you stay ahead of those changes over the next 12 months, three years, or even five?), and have a clear roadmap for how AI fits into the business strategy.

Tactically, a governance framework is non-negotiable. Even if your organisation isn’t ready to fully embrace AI, setting up the right governance principles now will pay off down the line. It’s easier to introduce AI when the underlying data is clean, secure, and well-governed.

Practically, executives should look for specific processes or areas where AI can be introduced to deliver immediate value. Whether it’s improving customer service, streamlining operations, or identifying new revenue opportunities, AI should be seen as a tool that can unlock real, measurable benefits when applied thoughtfully. And with solutions like Azure OpenAI Service now widely available, building your own copilot and generative AI applications is becoming increasingly accessible.

Building a future-ready organisation.

AI is no longer just a buzzword; it’s a powerful tool reshaping how businesses operate, compete, and innovate. But to truly unlock the potential of AI, businesses must first prioritise their data. That means investing in data maturity, setting up strong governance frameworks, and aligning data strategies with business goals.

For data leads and technology heads, this is undoubtedly an exciting time. AI is opening doors to new possibilities, from real-time decision-making to predictive analytics. By ensuring your data is well-structured, well-governed, and always up to date, you can harness AI to drive lasting value for your organisation.

Every day’s a school day. Doubly so in an era where technological disruptions are becoming more prevalent. So how can Australian schools and universities build resilience into IT?

Introduction

Never let a good crisis go to waste. And with what was labelled as one of the largest IT outages of all time now in the rearview mirror, it’s vital we learn the lessons. Whether it’s primary schools transitioning to cloud-based systems or universities managing hybrid learning environments, the need for resilient infrastructure has never been greater. But what practical steps can educational institutions take to better prepare for future disruptions?

In the words of the philosopher Alain de Botton, resilience is “A good half of the art of living.” In the education sector, resilience may also be the smartest half of learning; it’s about maintaining the ability to teach and learn under pressure. The global outage earlier this year reminded us of how deeply connected we are, not just within schools or campuses, but across the technology landscape that supports learning.

Many breathed a sigh of relief that the incident wasn’t the result of a malicious attack. But that does little to comfort the many institutions and students affected by interruptions in online learning, lost productivity, or delayed assessments. For schools relying on cloud-based systems, Learning Management Systems (LMS), and remote access for staff and students, the outage showcased how our interdependence in technology can be both a strength and a vulnerability.

Even if your institution wasn’t directly impacted, odds are, it felt the ripple effects. When technology fails, it doesn’t matter how well-maintained your internal infrastructure is when external partners or providers hit a wall. Which brings us to the concept of resilience: continuity is how a school or university reacts to crises, but resilience is what helps prevent them in the first place.

In the wake of this global disruption, it’s clear that education needs a renewed focus on technology resilience. Here are five areas of focus for schools and universities.

1. A Connected Response: Reflecting on Readiness

When the emergency protocols were triggered, what happened? This recent crisis was a stress test for many schools and universities, revealing how prepared—or unprepared—they were. Did your institution have a disaster recovery plan ready to go? Were backup systems for cloud-based platforms functional and easily accessible? Did staff and administrators know what to do, or were they left scrambling?

Conducting a Post-Incident Review (PIR) is essential. Look beyond just the technical fixes. Was communication clear and effective? Were students and staff kept informed? How did your external partners perform? A thorough review, supported by independent third-party experts, can offer critical insights into what went right and what needs improvement in future scenarios.

2. The Human Element: Cool Heads and Steady Hands

While automation can take us far, this outage demonstrated that when systems break, human intervention is still critical. During the outage, tech teams worked tirelessly to fix issues across the globe, some even driving to remote locations to restore service. This applies to schools and universities as well. Whether it’s a network administrator working through the night to get the LMS back online or an IT support officer helping teachers navigate temporary disruptions, the human touch is irreplaceable.

For schools with limited internal resources, it’s worth considering partnerships with MSPs or outsourced IT support that can step in quickly during a crisis. Ensuring your team, whether internal or external, is trained, responsive, and able to work under pressure is key to bouncing back from technical disruptions.

3. Know Your Ecosystem: Interdependence in Education

In education, IT systems don’t exist in silos. From primary schools using national assessment platforms to universities relying on global research networks, educational institutions are part of a much larger ecosystem. This means that understanding your technological interdependencies is critical for resilience. When one system fails, the effects can quickly cascade.

For example, many schools rely on educational software providers. Do you know how resilient these partners are? How will an outage affect your ability to access critical learning materials or online assessments? Understanding where the vulnerabilities lie within your extended network—whether it’s a cloud service provider, an LMS vendor, or a national infrastructure—is crucial for resilience planning. Engage with your partners to assess their risk preparedness and ensure they are on the same page when it comes to incident response.

4. Disaster Planning: A Comprehensive View

Educational institutions face a range of risks, from cyber threats to natural disasters and IT failures. While cyber risks have dominated recent headlines, the outage highlights the need for a broader view of risk. Whether it’s a server going down during exams or a data breach impacting student records, it’s essential to have robust contingency plans for all possible scenarios.

Recovery is not just about restoring data—it’s about ensuring continuity of learning. Schools and universities should regularly review and test their disaster recovery plans. Do you have clear protocols for when online learning platforms go down? How quickly can you recover access to digital learning resources? How do you communicate with students, teachers, and parents during a disruption?

5. Building a Culture of Resilience: From the Ground Up

Resilience in education is not just about having the right technology in place—it’s about fostering a culture that prioritises preparedness and adaptability. If resilience and contingency planning weren’t already a topic of discussion at the leadership level, they should be now. Educational institutions, from school boards to university councils, must embed resilience into their technology strategies.

This means involving everyone—teachers and administrators—in regular testing and planning. Ensure that staff have access to critical recovery plans and know how to act in an emergency. Regular drills, clear documentation, and ongoing training are essential. Importantly, resilience planning should extend beyond the IT team. Make sure the educational leadership is invested and aware of the importance of a proactive, rather than reactive, approach to IT crises.

Conclusion

The global outage was a painful wake-up call for many sectors. Education is no exception. Is it possible to predict or avoid future disruptions? No. But schools and universities can take steps now to build resilience into their IT, ensuring that when the next crisis hits, they’re prepared to keep teaching and learning. In education, where every lost day can impact outcomes, the cost of proactive resilience planning will always be less than the price of inaction.

Bet on the future—it’s going to happen anyway.

The Gartner IT Symposium/Xpo 2024 – held on the glorious Gold Coast in mid-September – has now wrapped. Fresh from shaking out the sand and washing away the saltwater, we asked three Brennan delegates to pen postcards of their key takeouts.

In mid-September, we were thrilled to participate in one of Australia’s premier technology events – the Gartner IT Symposium/Xpo 2024. With several of our team meeting with CIOs and IT executives, sitting in on sessions, and our founder and MD, Dave Stevens, hosting a theatre talk, we asked three of our representatives for their reflections. And we end with our own summary of the event’s all-consuming topic: AI.

Duncan Ayres

Enterprise Business Development Manager

“Over recent years, the leading topic at tent-pole events like Gartner was security – understandably so given the Optus and Medibank breaches. But this year marked the utter domination of AI. One common theme I picked up on was the recognition that a governance wave needs to wash through organisations before AI can be fully utilised. Aggregating, organising, and cleaning data are hugely complex tasks. For the most part, these are cleanly contoured mechanical tasks, and most organisations we speak to are on that journey. But it’s the more nuanced area of data governance that’s more nebulous right now.

Organisations are (rightfully) concerned about how data might be used, who has access to it, and how AI might introduce unwanted bias. My hunch is there will be a growing consideration of the principles and guidelines that govern the ethics of data collection, processing, and use. This was brought to life brilliantly in a session by Ann Larkins, Executive Director, and CIO at Australian Red Cross Lifeblood, as she demonstrated the crucial role of ethical data and the governance considerations inherent to AI initiatives.”

Nick Sone

Chief Customer Officer

“I’ve always enjoyed attending Gartner and seeing how the friction of innovation sparks conversations focussed on implementation. As technology evolves, so does the lead theme. (Spoiler alert: AI took star billing this year.) But after conversations with multiple CIOs, I couldn’t help feeling there was a dissonance between the promised sunlit uplands of what AI might deliver and the lived reality of what organisations are experiencing.

What came through for me in many of those conversations is that robust discussions on a range of issues – effective change management, meaningful productivity use cases, end-user productivity gains using existing tech, benchmarking ROI – continue to dominate within organisations.

One throughline that connected it all was the recognition that the AI embrace is complex, and balancing that complexity with users’ capacity to effectively implement AI initiatives isn’t as straightforward as imagined. Yet. As a result, some are adopting a “wait-and-see” position. But many more are taking measured, incremental and repeatable steps to adoption. Microinnovation is very much alive.”

Peter Soulsby

Head of Security

“AI is dead. Long live AI. I realise this sounds provocative. But having wandered the Xpo floor, dropped in at booths, and sat in on numerous sessions – many of which were incredibly illuminating – my overriding takeout was that the AI-centric agenda didn’t entirely quench the delegate thirst for definitive answers on how and where to leverage it.

At a macro level, I suspect AI (at least in the near-term) won’t save money, nor will it be the cure-all, fix-all silver bullet businesses are hoping for. At a macro level it could be argued AI is an incredible technology in search of utility. But zoom in a little, and targeted use cases aligned with anything that looks like AI – be it machine learning, robotics, automation, LLMs, or Generative AI – are likely to turn the dial more effectively and efficiently over time than sweeping macro applications.

It’s a view that was elegantly expressed by Dave Stevens, our founder and Managing Director, in his “Secure. Automate. Evolve. Repeat.” theatre talk, where he made a powerful case for Micro Innovations – the application of new technology to unlock incremental changes that deliver huge wins. Intentionally designed to be small but core to an organisation’s digital transformation strategy, these targeted, tailored, customised and incremental innovations can stimulate profound organisational change when done well.

When I think about the security implications of AI, one of the considerations I hear across the industry is that company-wide use cases are leaving organisations susceptible to more unintended consequences than targeted AI applications, which don’t need to be governed so tightly, and aren’t as reliant on stringent security guardrails.

And on a non-AI front, I found Marty Resnick’s session – “The future of computing” – fascinating, especially his take on how the frenetic competition driving innovations in space technology will impact the rest of us earthbound mortals over the next 5-10 years.”

All in on AI.

With over 70 AI-centric and AI-adjacent topics crammed into the three-day event (and we may be undercounting), there’s no way to neatly capture the key themes in a compact summary. But we’ve given it our best shot.

Technology leaders pursuing AI initiatives were reminded to focus on three core outcomes: business, technology, and behavioural.

For business, the emphasis should shift from broad AI strategies to targeted productivity gains, while treating AI investments like a portfolio.

On the technology front, leaders are encouraged to manage AI demands across the organisation, not just within IT, and craft AI systems that suit their unique needs.

Just as important is the human component – co-designing AI processes with the people impacted, and ensuring their experience is prioritised alongside business and tech goals.

To scale AI effectively, leaders should consider incorporating proven frameworks and prioritise use cases that align with future needs. Responsible AI must be at the forefront, as should consistent investments in data governance and AI literacy, via hands-on training. Teams should be equipped with AI-powered tools to enhance learning and productivity, ensuring skills development is fast and impactful.

Increasingly dependent on technology to manage operations, mining operators have effectively become tech companies. But more tech means more opportunities for disruption. In this new article from Australian Mining, Peter Soulsby, Brennan’s Head of Security, digs into the challenges and solutions.

Cybersecurity is critical for an increasingly digitalised Australian mining industry.

In recent years, the digital transformation has revolutionised the way mining companies operate.

Delivering a multitude of benefits, including reduced costs and improved operational performance, introducing digital solutions is a no-brainer.

However, an aspect of digitisation that isn’t widely discussed in the Australian mining industry is cybersecurity. This is something Brennan is working to change.

Drawing on extensive experience in the mining sector and boasting over 27 years of IT expertise, Brennan has evolved into one of Australia’s leading and independently owned systems integrators.

With cybersecurity woven across its business and within its solutions, Brennan understands that improved technology availability has created more opportunities for threat actors to interfere with mining operations.

“With mining companies increasingly using technology to manage their operations, they’re effectively becoming technology companies,” Peter Soulsby, Brennan’s head of security, told Australian Mining.

“The challenge with that is more technology means more opportunities for disruption.

“Disruptions can take many forms. They can be the unintentional kind, created by users, employees or contractors. Or they can be intentional, created by threat actors such as hackers.

“The modernisation of mining has inadvertently created more risks, which is why cybersecurity is just as important in mining as it is in any other industry.”

Utilising cybersecurity technology can help users understand risks, such as what could go wrong and how those events could occur and unfold. In turn, this can inform the strategies needed to protect businesses against unwanted outcomes.

“Cybersecurity incidents can erode trust, potentially affecting revenue and opportunities to generate new business,” Soulsby said. “On the flipside, cybersecurity is an investment that will protect your operations, revenue and profit, as well as establish your brand as trusted and safe.”

Brennan proactively identifies and assesses the risks most important to its mining customers, using these as a starting point to find the best solutions for their needs.

“We map business priorities and risks with IT and cybersecurity priorities and risks. It’s crucial that we understand what’s important to a business, and don’t just deliver a solution for the sake of it,” Soulsby said.

“When we understand what those risks are and what they mean, we can implement controls to mitigate them.”

As an ISO 27001 certified partner, Brennan’s outsourced IT services are underpinned by a comprehensive cybersecurity framework that follows the Essential Eight mitigation strategies.

These include patch applications, patch operating systems, multi-factor authentication, restricting administrative privileges, application control, restricting Microsoft Office macros, user application hardening, and regular backups.

“We’ve developed a cybersecurity reference architecture to ensure best-practice security controls are in place within Brennan and across our client base,” Soulsby said.

“We use this architecture to help our clients efficiently deliver cybersecurity, helping them both understand the importance of cyber protection and protecting their company from pressing cyber threats.”

Like many sectors, the mining industry is facing cyber challenges related to identity, with corporate email addresses and passwords susceptible to being compromised.

In large part, this is due to different authentication systems requiring various forms of identity, often resulting in the creation of several identity systems that make identity management more difficult.

“What Brennan does is successfully implement solutions that enable better identity management. It means that commonly recurring actions, such as onboarding a new starter, keeping track of a contractor’s lifecycle, or offboarding an employee at the end of their employment, are made simpler and more secure,” Soulsby said.
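The lifecycle pattern described here can be sketched in miniature. The system names are invented, and this is plain Python rather than any real directory service: the point is simply that one authoritative identity record drives access across every connected system, so offboarding becomes a single state change rather than a hunt through separate identity silos.

```python
# Hypothetical connected systems (invented names for illustration).
SYSTEMS = ["email", "erp", "site_access"]

directory = {}  # authoritative record: identity -> systems with active access

def onboard(user):
    """One action grants the standard set of access everywhere."""
    directory[user] = set(SYSTEMS)

def offboard(user):
    """One action revokes everything -- no orphaned accounts left behind."""
    directory[user] = set()

onboard("contractor42")
print(sorted(directory["contractor42"]))   # all systems granted

offboard("contractor42")
print(directory["contractor42"])           # empty -- nothing left behind
```

The contrast with several disconnected identity systems is that, there, every system would need its own offboarding step, and any missed step leaves a live credential behind.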

“We view security solutions through three lenses: what the security market is telling us and what emerging technologies are out there; securing managed services; and client feedback and demand for new capabilities.”

Through its team of certified experts, Brennan removes the complexity around cybersecurity technology by evolving the links between cybersecurity and IT, as well as operational technology (OT).

“By managing identities across IT and OT and breaking the boundaries that previously existed between the two, we’re proving cybersecurity can be done across the entire mine value chain,” Soulsby said.

This feature appeared in the August 2024 issue of Australian Mining.

Never let a good crisis go to waste. And as the dust settles on what has been labelled as the largest IT outage of all time, many are using the episode to entirely reboot the concept of resilience. But what are some of the practical considerations IT organisations can harness to be better prepared for whatever comes next?

In the words of the philosopher Alain de Botton, resilience is, “A good half of the art of living”. As the waves of chaos unleashed by the recent global outage (#CrowdStrike) recede, it turns out that resilience may also be the smartest half of the cost of business.

Many were heaving a sigh of relief that the incident wasn’t malicious. Which must come as cold comfort to the many businesses and their customers who lost billions in revenue, millions of hours in inconvenience, and the expense of technical remediation.

The global outage is another timely reminder of not only how deeply interlinked the world’s technology infrastructure, systems, and software are, but how this interdependence is both their strength and their weakness.

Even if your organisation wasn’t directly impacted, odds are it was incidentally. When the system folds, it doesn’t matter how robust the walls of your house are when a wrecking ball crashes into the common wall you share with your neighbours.

Business continuity is how a company reacts in times of trouble. Resilience prevents it. In this particular incident, the software and hardware vendors in question have released mea culpas and long lists of practical, technical, and ethical upgrades. All of which is needed. But perhaps one of the most discussed and arguably most beneficial upsides is the renewed focus on systems and infrastructure resilience, and what it might look like.

These are five thought starters on our minds here at Brennan.

1. A connected response. A thorough review.

When you smashed the “Break in case of emergency” glass, what happened? Much will have been revealed over the past month, including the durability of your strategies, plans, and protocols. Did a well-oiled Disaster Response kick in? Were back-up systems in place and ready to deploy? Were recovery protocols followed? Was your in-house and partner support able to turn on a dime? Was everyone singing from the same hymn sheet? Had you run drills to pressure test your responses? If so, how did the actual delivery match up? A Post Incident Review (PIR), prepared internally and augmented by third-party providers, can be an indispensable tool for forensically analysing what happened, and what didn’t.

2. Cool heads. Steady hands.

As bad as the outage was, the killer wasn’t the disruption alone. It was the physical hands-on intervention needed to remediate the 8.5 million affected machines – some of which were in extremely far-flung locations or physically inaccessible spots. Automated remediation works up to a point. But in this instance, there was no substitute for cool heads and steady hands. In short, living breathing humans, responsive to calls, proficient in the challenges, and adept in knowing and deploying the fixes. From CIOs rolling up their sleeves and going desk-to-desk to roll out resets to tech specialists driving through the night to outback locations, stories of how the tribe came together were a humbling reminder of how the tech community rallies together during a crisis.

3. Know thy neighbours.

A key characteristic of any organisation’s value chain is that the risks no longer sit just within your own walls. They’re interlinked with the customers you serve, the partners you align with, and even their partners. When one goes, all go. While the Security of Critical Infrastructure (SOCI) Act holds organisations to account across eleven sectors earmarked as critical to Australia’s sovereignty, security, and economy, events of the nature we’ve just experienced highlight the value in knowing where your neighbours’ systems and infrastructure intersect with yours, how response-ready they are in times of crisis, as well as their appetite for and activity in mitigating risks.

4. Think disastrously.

Whether it’s a cyber breach, an IT failure, human error, or a natural disaster, the damages inflicted by unexpected events may look different, but the impact hurts the same. Over the past few years, cyber risks and cyber threats have dominated discussions on prevention and mitigation strategies. But the recent outage has underscored the need for organisations of all stripes to take a 30,000 ft view of risk tolerance, risk assessment, and risk management. Whether it’s a renewed focus on recoverability (driven by robust and always-on backup protocols), a drive to inject more diversity across infrastructure and operating systems, a doubling down on policies and procedures, or a mix of all, there’s no substitute for prosecuting a higher order interrogation of what can go wrong to inform the program of works needed to avoid the pain.

5. Be strategic. Get buy in.

Organisations can’t control external threats. But they can control their preparedness. And if resilience and contingency planning wasn’t already on the Board’s and ELT’s radar, it is now. Core to this is cultivating a culture that’s organisationally baked in from top to bottom, embedding robust contingency plans that encompass infrastructure and key business operations. These need to be vetted, assessed for effectiveness, presented to business leads, and run through with the boots on the ground, inside and outside of your organisation. Have you scheduled regular backup and recovery strategy drills? Have loss-of-access scenarios been factored in? Who has access to the incident response plan? Is it regularly reviewed for clarity and efficacy? Should you adopt a proactive posture with pen testing? What are the critical controls you need in place at 3 months, 6 months, a year? These are some of the starting points worth considering.

The worldwide outage was painful, costly, and unavoidable. Is it possible to predict, deflect, or avoid all future risks? Of course not. But pain breeds innovation, innovation fosters resilience, and the cost of inertia will always outweigh the price of proactivity. Bet on the future. It’s going to happen anyway.

Residents, clients, patients, and the people who care for them in aged care settings are the living embodiment of resilience. Could their lifelong lessons also set the template for IT providers? John Sutherland, HammondCare’s Chief Information Officer, and Dave Stevens, Founder and Managing Director of Brennan, sat down with Aged Care Insite to find the answers.

Defined as ‘the capacity to withstand, adjust to, or recover from misfortune, difficulties, or change’, resilience is, in the words of the philosopher Alain de Botton, “A good half of the art of living.”

Resilience picks us up after knock-backs, conditions us to adapt to unavoidable change, and trains us to move on when we should. People in aged care settings and the staff who provide their care are the living embodiment of resilience.

Earlier this year, we made the case for IT maturity as the answer to today’s opportunities and tomorrow’s challenges in the aged care sector. But could resilience prove just as powerful?

In an era upended by digital transformation, an economy buffeted by unprecedented headwinds, and an aged care sector grappling with generational challenges, resilience and continuity have become operational necessities.

The compounding power of continuity.

HammondCare, a leading care provider, are leading the charge on both fronts. The core services that knit together their daily operations – Service Desk and telephony are two prime examples – can have a profound effect on the continuity of patient care. Often taken for granted when working as they should, everyone feels their absence when they don’t.

“The outsourced service desk and telephony services are pivotal in bolstering resilience across critical areas of our operations,” says John Sutherland, HammondCare’s Chief Information Officer. “These foundational services are significant for any organisation.”

The provision and management of resilient services fulfil a number of HammondCare’s core operational needs. But it’s the ripple effect of this resilience that HammondCare value.

Systems resilience is creating an efficiency lift. In turn, those efficiency gains are expanding HammondCare’s organisational capacity to focus on projects and areas that matter most to the business – improving quality of life for people in need.

“HammondCare, like most providers across the aged care sector, have been making significant investments to modernise systems and processes,” explains Sutherland. “Service desk and telephony, when managed effectively, ensure seamless operations. When employees can rely on timely and efficient issue resolution, it frees our internal teams to prioritise impactful projects, all with the aim of improving the quality of care for our clients, residents and patients.”

Weathering the storms of change.

Resilience isn’t a position confined exclusively to aged care settings. It’s a posture a multitude of sectors are mindful of, as Dave Stevens, the founder and Managing Director of Brennan, Australia’s leading systems integrator and outsourced IT partner, explains.

“Resilience is not just about weathering storms,” says Stevens, “but about thriving in the face of adversity. Resilience is about being prepared. It’s how well an organisation can absorb the stress of unforeseen circumstances and how swiftly they can restore critical operations.”

“This includes reducing vulnerabilities by eliminating single points of failure, and implementing robust data governance, backup and security.”

All of these issues were thrown into stark relief at the outset of COVID and were keenly felt across the aged care sector.

“As a black swan event that impacted everyone, COVID underlined how deeply interlinked IT resilience is to business resilience,” says Stevens.

“As it was unfolding, a majority of organisations had to redefine how they did business. They had to stand up new ways of working overnight. Post-pandemic, it’s been about how you attract staff back to the office, but also how you securely manage and govern remote workers and the rise of SaaS based platforms that aren’t on your network.”

Navigating the complexities.

Stevens is keenly aware that the challenges in creating business resilience are multifaceted and increasingly demand a diverse blend of skills spanning the entire IT environment, including cloud and infrastructure, networks, cybersecurity, applications, and telecommunications.

“As the technology mix has grown, so have the challenges in managing it all. Navigating those complexities requires best-of-breed partners. Not because organisations lack the will to take those challenges on, but simply because they don’t have the resources or headcount to do it in-house,” says Stevens. “With the technology and business landscape only set to grow in complexity, businesses should absolutely call on experienced specialists, rather than shouldering the burden of building up those specialist skills in-house.”

Sitting across it all are growing concerns about cybersecurity, and the steps needed to safeguard business operations.

“From a security perspective, the vast majority of incidents happen at the edge on a user’s device, rather than a system bug that gets exploited,” says Stevens. “With a 24×7 Security Operations Centre and Network Operations Centre, Brennan remains vigilant, running in preventative mode to enable proactive security measures. Incident response capabilities need to be always-on and proactive – not only to prevent dips in service delivery, but to avoid reputational damage.”

The results of resilience.

But it’s the people on the frontline – the carers, support staff, and the residents – that are benefitting the most from a renewed focus on technological resilience.

“Overall, the employee experience when it comes to using technology has been smoother as a result of partnering with Brennan,” explains Sutherland. “The main benefits we’ve enjoyed relate to the support-at-scale for those at the point of care. Being able to respond in a timely manner, 24×7, when things don’t work the way we expect them to, helps us bring the best care to those we seek to support.”

“The partnership with Brennan has allowed us to deal with the day-to-day support needs of staff, improve processes, and reshape our business digitally,” explains Sutherland. “Hybrid working is the norm these days and that’s only been made possible through remote working, which we couldn’t do without the support of Brennan.”

The feedback from HammondCare’s clients has been “very positive”, he says. “Across a range of managed service provider key performance indicators, Brennan’s response rates are consistently high with excellent first-time response rates, above average time-to-answer metrics, and very low call-abandonment rates.”

“We are also pleased with the calibre of the service and account management. In the rare cases we need to escalate matters to management, we have received great support.”

Resilience – that essential life skill so often attributed to ripe age – might give us the mindset to cope with trying times. But it might just arm organisations with the tools that help us prosper. Dave Stevens sums up: “At Brennan, we understand the importance of collaboration and partnership. By remaining truly connected with our clients, we ensure that our solutions are tailored to their specific requirements, driving mutual success. Our mission at Brennan is to build resilience and use that as a tool that empowers Australian businesses to thrive in the digital age and unlock their full potential.”

This feature originally appeared in the August 2024 issue of Aged Care Insite.

With the rise of as-a-Service delivery models, IT departments are faced with decisions beyond choosing the right technology. There is now a range of payment options to choose from, and the trick is to know which one is right for your company and your current project.

What are Capex and Opex?

Capital Expenditure (Capex) is normally used for major investment and is shown on the company’s balance sheet. The capital is exchanged for an asset, which can then be amortised and depreciated over its lifespan and can add value to the business. Operational Expenditure (Opex) is used for ongoing expenses. It’s shown on the profit and loss statement, and it isn’t exchanged for tangible assets. However, Opex is usually tax-deductible, which may offset the fact that it doesn’t deliver an asset that can be depreciated.

The first step is to truly understand the breakdown of IT-related costs. Along with the obvious costs of infrastructure, hardware, and software, there are ongoing maintenance and staff costs, as well as hidden costs like the energy to power and cool the equipment or the cost of real estate within the office.

Costs: Capex vs Opex

Only once the business understands the complete costs involved can a decision be made between Capex and Opex – whether to choose the on-premise infrastructure that requires Capex, or a cloud service provider that requires Opex.

Conventional wisdom suggests that Opex is preferred for IT systems where possible because it’s usually cheaper. UC Berkeley’s Reliable Adaptive Distributed Systems Laboratory (RAD Lab) estimates that cloud providers’ costs are 75 to 80 per cent lower than those of internal data centres.[1] So it may be true that a cloud-based solution is less expensive than an on-premise one; however, there are other things to consider when it comes to deciding between Capex and Opex.

Purchasing hardware and infrastructure via Capex means the business owns the asset. The business may amortise and depreciate the asset for taxation purposes, and then extract value from it after its useful life by selling it (either whole or for parts). With careful maintenance, plenty of IT hardware continues to run without issue until well after its anticipated lifespan.
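To make the trade-off concrete, here is a minimal sketch in Python, using entirely hypothetical figures, comparing the straight-line depreciation of a Capex purchase against the running total of an equivalent Opex subscription over the same period:

```python
# Hypothetical figures for illustration only -- not real pricing.
CAPEX_COST = 120_000          # upfront hardware purchase (Capex)
USEFUL_LIFE_YEARS = 5         # depreciation period
OPEX_MONTHLY = 2_500          # equivalent cloud subscription (Opex)

# Straight-line depreciation: the asset's cost is expensed evenly
# over its useful life.
annual_depreciation = CAPEX_COST / USEFUL_LIFE_YEARS

# Total Opex spend over the same five-year horizon.
opex_total = OPEX_MONTHLY * 12 * USEFUL_LIFE_YEARS

print(f"Annual Capex depreciation: ${annual_depreciation:,.0f}")  # $24,000
print(f"Five-year Opex total:      ${opex_total:,.0f}")           # $150,000
```

Which side of the comparison wins depends entirely on the numbers you plug in — which is exactly why understanding the complete cost breakdown comes first.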

Owning the asset also means the business is stuck with it, even if something better or more useful comes along, which can be frustrating for businesses that want to leverage new and emerging technologies.

By contrast, an Opex model means that the business doesn’t own the asset – but is also not locked into hardware that may quickly become obsolete. By choosing a cloud approach, the business can get access to the latest technology without necessarily incurring higher costs.

The Opex model also means companies pay only for the capacity they use, which can be invaluable for organisations that experience spikes and lulls in demand. Instead of paying upfront for storage or computing power that may sit idle for days, weeks, or even months at a time, the business can pay less when demand is low and then ramp up when needed.

Summary: Capex vs Opex

The final decision of whether to choose Capex or Opex for IT needs depends on the company’s requirements and budget. With both payment methods having pros and cons, it makes more sense for businesses to decide on cloud versus on-premise solutions based on efficiency and agility, rather than cost alone.

[1] “Above the Clouds: A Berkeley View of Cloud Computing” by Michael Armbrust, Armando Fox, Rean Griffith, Anthony D. Joseph, Randy Katz, Andy Konwinski, Gunho Lee, David Patterson, Ariel Rabkin, Ion Stoica, and Matei Zaharia. Technical Report UCB/EECS-2009-28, EECS Department, University of California, Berkeley. http://d1smfj0g31qzek.cloudfront.net/abovetheclouds.pdf

As Australia’s mining industry continues to lean into the digital era, Mal Shafi, Brennan’s Head of Digital, sat down with Australian Mining to dig deep on one of the sector’s most precious commodities: data.

Brennan is helping mining companies enhance decision-making through its data and AI offerings.

It’s one thing for a mining company to have a range of data. But understanding and interpreting it to achieve key project objectives is a different ballgame entirely.

It’s something Brennan understands well.

With over 27 years of IT experience, Brennan has become one of Australia’s largest privately-owned and most trusted systems integrators.

“Key for us is understanding what a company’s objectives are first, then using technology to facilitate those goals,” Brennan Digital director Mal Shafi told Australian Mining.

“In this constrained economy, technological initiatives have to move the business forward while making a return on investment. And that’s what we do.”

One of Brennan’s key portfolios is data, comprising an extensive range of services and solutions designed to help organisations flourish. And with the emergence of AI, data has quickly become an area of increased focus.

One prime example is where Brennan are enhancing resource optimisation on mine sites by using Azure AI, a platform within Microsoft Azure.

“By looking at the large scale of mining, we analyse data from multiple sources to provide a single comprehensive view,” Mal said.

“From there, mining companies can enhance decision-making around resource allocation, optimising machinery usage, labour, and raw materials to avoid downtime and unnecessary spend.

“As one of Australia’s leading Microsoft partners, we use Azure to automate tasks like equipment monitoring and maintenance schedules. This means we can reduce equipment downtime and operational costs through predictive maintenance.

“Using data to predictively rather than reactively enhance operations saves time and money in the long-term.”

Brennan’s support of AI solutions harnesses and optimises data to identify and analyse potential hazards so companies can ensure they are complying with workplace safety and environmental regulations.

“We can use Azure to improve geological data analysis to enrich exploration by identifying promising drilling areas, increasing the likelihood of discovering new deposits and optimising production planning,” Mal said.

“An old friend of mine is a geologist and they spend so much time in remote parts of the world collecting huge tranches of data and research to find the next productive mining spots.”

“But if you automate that, you’ll save time and won’t impact the ground as much, because drilling is tailored to the higher-yielding spots you’ve identified.”

Brennan is deeply embedded in the Microsoft ecosystem, allowing it to deliver unified, scalable and highly efficient platforms tailored to specific customer needs.

“To ensure ease-of-use and immediate value, Brennan has developed Quick Starts: pre-packaged, fixed-price, fixed-outcome solutions that can be rapidly deployed,” Mal said.

Brennan’s data portfolio has many success stories. One case study involved Brennan building a predictive maintenance solution to predict the lifespan of a company’s critical assets.

“We have also managed data estates for mining companies by rightsizing and orchestrating the data to ensure it’s optimised,” Mal said.

“Previously, employees spent significant chunks of time carrying out database management.

“By automating that process, those employees are able to focus more on business-led activities. Customer feedback has told us it’s the equivalent of gaining extra headcount.”

Brennan prides itself on relentless iterative improvement and modernisation, ensuring each digital solution it designs, builds, and deploys is right for its customers and up to date.

“This proactive approach allows us to work really closely with our clients to meet their evolving requirements,” Shafi said. “One of the things they appreciate is our transparency, as well as our ability to look at trending data and go, ‘Hey, this is a common problem in the market, how can we improve it?’

“Brennan wants to be a strategic partner. For every project, our goal is to deliver the outcomes our customers want, providing them with real value.”

This feature appeared in the July 2024 issue of Australian Mining.

Copilot has the potential to lift efficiency and productivity to a higher plane for organisations of all sizes. But it pays to work through a pre-launch checklist to make the flight smoother.

Unless you’ve been cloistered in a remote mountain-top retreat, you’ll be keenly aware of AI’s everywhere, all-at-once ubiquity.

On the upside, AI has been loudly heralded for its awesome power and game changing potential to unlock productivity and turbocharge efficiency.

On the downside, AI comes with a large side dish of unanticipated risk. Data permissions and data integrity, to name just two. All of which makes it entirely understandable as to why companies short on in-house AI expertise (and, for now, that’s a lot of organisations) are hesitant on where or how to begin.

But starting carefully and strategically, using proven solutions from leading AI builders, like Copilot for Microsoft 365, and the guidance of a trusted IT partner, can be a canny way to board the plane of progress.

These are the steps you’ll want to consider to get your organisation Copilot ready.

Who is Copilot for?

With Microsoft nesting their AI offering across their application ecosystem—Teams, Outlook, Word, Excel, PowerPoint, and more—one of Microsoft Copilot’s drawcards is how their AI solution has been seamlessly and securely integrated into the already familiar user experiences most of us use daily.

And the organisations that stand to benefit most from Microsoft Copilot include:

Organisations looking to leverage AI in any capacity.
Whether analysing trends, enhancing decision making, supporting creativity and coaching, or streamlining communication and collaboration, the applications for AI are endless. And here to stay. Like the introduction of PCs, email, and the internet, it makes business sense to master new and fundamentally transformative technology as it emerges.

Organisations with workloads already in Microsoft 365.
Because Copilot out-of-the-box can’t access or return data sitting outside of 365 tenants, it only comes into its own for organisations that have already migrated workloads into Microsoft 365.

Organisations looking to streamline functions.
One of the promises of Copilot is its ability to action common, repeatable tasks with far greater accuracy and speed than its human counterpart, making it ideal for organisations looking to streamline costs and resources associated with onerous administrative functions.

Who can fly Copilot?

Like private jet ownership, Microsoft initially launched Copilot to a select few, before extending its availability to any organisation willing to purchase 300+ licences through a Microsoft Enterprise Agreement.

Then, in early 2024, Microsoft scrapped that limit entirely, opening the door for organisations of any size to buy and assign Copilot licences, either directly through Microsoft or via a credentialed service provider.

But those licences are not standalone. As of writing, Copilot is only available as an add-on, which means you’ll need one of the following active Microsoft licences to use it:

Microsoft E3 & A3

  • Microsoft 365 E3
  • Office 365 E3
  • Microsoft 365 A3 for faculty
  • Office 365 A3 for faculty

Microsoft E5 & A5

  • Microsoft 365 E5
  • Office 365 E5
  • Microsoft 365 A5 for faculty
  • Office 365 A5 for faculty

Microsoft Business

  • Microsoft 365 Business Standard
  • Microsoft 365 Business Premium

If you have one of these licences, you’ll already have seen Microsoft flagging Copilot availability inside most Office 365 apps, including Teams, Outlook, Word, Excel, and PowerPoint.

And for apps outside of the Office 365 bundle? They enjoy their own dedicated Copilot, like Microsoft Copilot for Security, or Microsoft Fabric.

As of writing, there’s no floor on the number of licences you’ll need. Which means you can kickstart your organisation’s Copilot experience with as few as one or two licences.

Yes, there is an upfront cost (currently $44.90 per licence, per month). And yes, you’ll be expected to commit to at least one year upfront. But that should be ample time to trial Copilot with your company’s structure, workflows, and teams, without an onerous financial burden.
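A quick back-of-envelope calculation (a sketch only — pricing and terms may change) shows how modest that first-year commitment is for a small pilot, based on the $44.90 per licence, per month figure above:

```python
# Illustrative first-year Copilot cost at the quoted price point.
PRICE_PER_LICENCE_PER_MONTH = 44.90
MONTHS = 12  # minimum one-year commitment

def annual_copilot_cost(licences: int) -> float:
    """Total first-year cost for a given number of licences."""
    return licences * PRICE_PER_LICENCE_PER_MONTH * MONTHS

# A two-licence pilot comes in under $1,100 for the year.
print(f"${annual_copilot_cost(2):,.2f}")   # $1,077.60
print(f"${annual_copilot_cost(10):,.2f}")  # $5,388.00
```

Even a ten-seat trial sits comfortably in most departmental budgets, which is what makes a small, structured pilot such a low-risk way to evaluate the tool.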

Taking to the air

Microsoft have put plenty of thought into making the experience of turning Copilot on as simple and as streamlined as possible, with dedicated controls within their admin centres and applications. But ensuring Copilot seamlessly slots into your organisation hinges on actioning some key fundamentals prior to launch, as well as organisational steps to aid its successful adoption and continued use.

1. Get your information ‘search-ready’.
Much of the magic behind Copilot is the orchestration of its Large Language Models (LLMs), Microsoft Graph, and the Microsoft 365 apps. Although a user’s overall Copilot access is controlled by Entra ID, Microsoft Graph acts as data gatekeeper, working with Copilot’s Semantic Index to orchestrate information retrieval during search.

By design, Copilot only returns information users have explicit, Graph-reviewed permission to access. Which is why getting your data search-ready is so crucial. If your organisation has robust access policies and controls in place (or ‘just-enough-access’), users will only be able to retrieve data they have permission for, and nothing else. Even if you don’t plan on adopting Copilot, implementing ‘just-enough-access’ will improve your organisation’s overall information protection.
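As a conceptual illustration only — the names and data structures below are hypothetical, and Microsoft Graph’s real permission model is far richer — permission-gated retrieval boils down to filtering search results against what the querying user is allowed to see before anything reaches the AI:

```python
# Conceptual sketch of 'just-enough-access' retrieval. This is NOT
# Microsoft Graph's API; it only illustrates the principle that search
# results are filtered by the querying user's permissions.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    allowed_users: set  # who may read this document

def retrieve(query: str, user: str, index: list) -> list:
    """Return only the matching documents this user is permitted to see."""
    return [
        doc.title
        for doc in index
        if query.lower() in doc.title.lower() and user in doc.allowed_users
    ]

index = [
    Document("Payroll summary FY24", {"cfo"}),
    Document("Payroll onboarding guide", {"cfo", "hr", "staff"}),
]

# A general staff member searching "payroll" sees only what they have
# explicit permission for.
print(retrieve("payroll", "staff", index))  # ['Payroll onboarding guide']
```

The takeaway: Copilot can only be as well-scoped as the permissions beneath it, so over-broad sharing settings become over-broad AI answers.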

Helpful guide: Get started with Microsoft Copilot | Microsoft Learn

2. Switching Copilot on.
Housed within the Microsoft 365 administration centre is a Copilot set-up guide, a helpful wizard-based experience that will step you through the necessary prerequisites to leverage the full Copilot experience. These include the enterprise apps, services, and licences you’ll need in place, as well as the assignment of available Copilot licences.

Helpful video: How to get ready for Copilot | Microsoft Mechanics

3. Devote time to training.
Input equals output. Knowing how to create and structure strong prompts (aka ‘prompt engineering’) will improve the odds of tighter, stronger, and more relevant responses. Dedicated user training, especially for those unfamiliar with AI’s capabilities, will be invaluable. And as AI evolves (and it will), so too will Copilot, meaning users across the board will benefit from regular refresher training.

Helpful video: Tips for writing effective prompts in Copilot | Microsoft

4. Establish a centre of excellence.

Running in parallel with training, creating groups or channels for users and teams to share their experiences and ask questions is a potent way to identify and work with Copilot champions. Giving your people a platform to share what’s working for them (think prompts, personas, shortcuts), as well as what isn’t, creates a virtuous circle for adoption and improvement.

Helpful tools: Copilot adoption | Microsoft

Of course, AI is evolving rapidly. Almost daily. Microsoft Copilot is no exception. New updates, naming conventions, and integrations are rolling out all the time, so it’s worth working with a trusted and experienced Microsoft Partner, like us, to confirm you’re working with the right version and integrating it across your organisation properly.
