Over the past year, the world’s biggest supplier of software for IT services has aggressively pushed to incorporate its artificial intelligence assistant Copilot into the Windows 11 operating system and the business applications most commonly used on it.
Even Microsoft Office was renamed Microsoft 365 Copilot, much to the confusion of everyone involved, since this is not the same application as the Copilot assistant integrated into Windows 11 itself.
It is also confusing because Microsoft holds a significant stake in OpenAI, the maker of ChatGPT, a Copilot competitor that is likewise being aggressively pushed into business processes.
Whilst AI is useful for certain specialist purposes, it comes with many unintended consequences, as well as security and liability risks, that businesses must address before integrating it widely into their workflows.
AI Does Not Know How It Completes Tasks
A fundamental problem with large language models (LLMs) such as Copilot is that they lack the reasoning to explain how they reached a given conclusion or solution.
For example, if an AI chatbot used for customer service is asked a question, it does not necessarily know the right answer, only what the right answer might look like.
In very general scenarios this is less problematic, but in other cases it can lead to serious issues if there is no oversight.
AI-generated written copy is often filled with mistakes and hallucinations, where citations and sources are simply made up because the tool does not understand the difference between correct and incorrect information.
Similarly, code generated by AI can have severe issues if it is not reviewed and debugged by a software engineer, and the resulting delays can offset any productivity benefits.
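To illustrate the kind of problem reviewers catch, here is a hypothetical, hand-written example (not actual Copilot output) of a plausible-looking Python helper hiding a classic bug: the mutable default argument is created once and shared between calls, so tags from one call leak into the next.

```python
# Hypothetical example: a plausible-looking helper of the kind an AI
# assistant might generate. It reads correctly but is subtly wrong.
def add_tag(tag, tags=[]):
    # BUG: the default list is created once, at definition time,
    # and shared across every call that omits the tags argument.
    tags.append(tag)
    return tags

first = add_tag("urgent")
second = add_tag("invoice")  # unexpectedly carries over "urgent" as well

# Reviewed version: use None as a sentinel so each call gets a fresh list.
def add_tag_fixed(tag, tags=None):
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

The bug only shows up across multiple calls, so it passes a casual single-call test, which is exactly why human code review and proper test suites remain essential.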
Privacy Concerns
As soon as Copilot began to be implemented in 2024, there were already significant security concerns surrounding its use.
The BBC reported the concerns of the Information Commissioner’s Office (ICO) after being informed that Copilot’s Recall feature could capture sensitive information such as passwords, financial information, sensitive health information and other legally sensitive data.
Even a year later, in August 2025, The Verge reported that Microsoft’s NLWeb project had a major “embarrassing” security flaw that allowed remote attackers to read sensitive files simply via a malformed URL, not helping the company’s precarious security reputation around AI.
Even when the system is working normally, Copilot collects a lot of user data, not always with the user’s informed consent, and exactly who accesses this data and how it is used remains particularly opaque.
Ultimately, Copilot should only ever be used for tasks that do not involve sensitive information, but as the tool becomes more closely integrated into Windows as a whole, avoiding potential legal liability becomes a more complex challenge for IT engineers and users alike.
Ethical Concerns
Several major AI systems are subject to legal challenges and ethical controversies, particularly over how the internal structures and algorithms underpinning an AI agent can change, either directly or indirectly, as more training data is integrated.
OpenAI faces a significant and potentially business-changing copyright infringement lawsuit that GeekWire reports may also encompass Copilot, owing to the unethical and potentially illegal methods by which data was scraped to train these large language models.
There are also concerns that AI system owners can alter those internal systems in ways that introduce bias or produce deeply unacceptable outcomes.
Part of the reason the xAI agent Grok has not been fully incorporated into Microsoft’s Azure enterprise platform, according to The Verge, is that the system was modified multiple times, causing it to give answers professing support for truly unacceptable belief systems and conspiracy theories.
The risk that such answers could be delivered to customers asking innocuous questions of AI customer service systems or social media account managers is one some companies are unwilling to take until they are certain the tool will uphold their values, beliefs and standards of professionalism.
AI Can Cause More Problems Than It Fixes
The biggest mistake any IT strategy can make is to assume that everyone in a company has the same needs and interacts with the same software in the same way.
The infamous saga of Lotus Notes, as reported in a 19-year-old article in The Guardian, highlights that a technically proficient and productive tool can become neither use nor ornament if integrated poorly or without forethought.
The same is even more true of AI. Whilst it can provide productivity benefits in the right hands, it can sometimes produce content, code and data so riddled with mistakes that fixing them takes longer than doing the job manually.