Artificial Intelligence Is Incompatible With The Future Of Communication – Here’s Why


The future of work is no longer merely a concept, but a reality — Covid-19 has made sure of that. The pandemic has accelerated workplace innovation across sectors to the point of no return, with contemporary businesses now almost entirely reliant on new technologies simply to exist. 

What role, then, does artificial intelligence (AI) have to play in this drastic shift? For some time now, I've firmly maintained that AI will take over the vast majority of process-driven work within 15 years. However, with years of workplace change crammed into a matter of months, the future has unfolded very differently from how we imagined it.

Rather than arriving through careful planning, this new way of working has been thrust upon companies. Without doubt, many were unprepared for it and have had to move quickly to put remote working solutions in place to keep business going. They simply didn't have the time to manage change and implement AI-driven solutions.

However, many predict that remote working will become part of the 'new normal' even after lockdown measures are eased. Companies like Twitter, for example, have already announced that their employees can work from home indefinitely. Assuming the trend towards permanent remote working continues, organisations will need to consider carefully which AI solutions they turn to for automating process-driven work. How, then, does this affect the security measures required to make remote working effective for companies?

Every time we send a message to a colleague or share a company file, we share bits of data electronically. Data, of course, is the lifeblood of every modern organisation, so when it is shared, it must be shared securely. Add AI into the mix and we run into a potential data security issue.

This is because, fundamentally, AI needs data to work properly. Its purpose is to access data, analyse it and generate better outcomes for organisations through automation. In doing so, it replaces certain tasks, but also enables employees to perform existing jobs more effectively. 

Yet, despite the productivity gains, this new era of remote working doesn't necessarily lend itself to using AI appropriately when it comes to transmitting data securely. AI-driven communications are just as vulnerable to security flaws as those handled by humans.

This is especially evident when we look closely at the kind of technology enterprises use to communicate internally – a crucial component of any business model outside an office’s four walls. According to Morten Brøgger, CEO of messaging and collaboration platform Wire, AI may not be as intertwined with the future of work as we think: “AI and the future of work aren’t necessarily compatible. Definitely not in the collaboration and communication market or in the future of work 2.0.” 

Brøgger continues: “The reason is that if you start building a lot of AI into communication tools, it means surveillance for your users, which is a clear breach of their privacy. If you do need an AI, you need to have the data to examine behavioural patterns. This means breaking end-to-end encryption, because there will be a machine that receives a copy of everything a user is doing.” 
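Brøgger's point is easiest to see in miniature. In an end-to-end encrypted exchange, a message is encrypted on the sender's device and can only be decrypted by the intended recipient; the server in between relays ciphertext it cannot read, so any AI running on that server has nothing meaningful to analyse unless the encryption is deliberately weakened. The sketch below illustrates the general principle using the PyNaCl library's public-key Box; it is a minimal illustration, not Wire's actual protocol, and the message contents are invented for the example.

```python
# Minimal sketch of end-to-end encryption (assumes the PyNaCl library:
# pip install pynacl). Private keys stay on the users' devices; the
# relaying server, and any AI hosted on it, only ever sees ciphertext.
from nacl.public import PrivateKey, Box

# Each user generates a key pair; only the public halves are shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly for Bob with her private key and his public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"Q3 forecast attached - confidential")

# This opaque blob is all the server (or a server-side AI) would receive.
print(ciphertext.hex()[:64], "...")

# Only Bob, holding his private key, can recover the plaintext.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))
```

The only way to give a server-side AI visibility into that exchange is to hand it the keys or a copy of the plaintext, which is precisely the surveillance problem Brøgger describes.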

Wire is one of a crop of collaboration tools that have enabled many organisations to continue operating during Covid-19. It found its niche in 2017 by specifically targeting large enterprises, which Brøgger believes have an advanced understanding of the importance of security and privacy. With a leadership team of ex-Skype employees, the company now positions itself as being on a mission to change the way employees communicate in the workplace.

Though organisations can integrate AI technology safely, the abrupt new version of the workplace we've found ourselves in has gone some way to exposing why AI isn't a panacea – at least not without careful security planning at the outset.

As Brøgger notes: “There were a lot of companies who were basically caught in this situation that weren’t ready for it. So, how do they put infrastructure in place that is sufficiently secure to allow people to work from home and work on things that are absolutely confidential? There are no longer any global rules – no one size fits all. That’s not how the world is, even with collaboration.”

It's clear that the world of work is changing, even if it took a pandemic to accelerate that change. Companies need to take stock of how they can make the most of the tools available to them to reduce inefficiencies, and I still maintain that AI is, in many ways, at the heart of that. But it must be adopted in a way that doesn't breach a company's security or put its most valuable data assets at risk.