Writing for Payment Expert, Ingmar Krusch, Chief Information Officer at Solaris Group, dissects the approach firms need to take to AI as the technology continues to move into the mainstream. 

By now we have all heard about artificial intelligence’s (AI’s) potential to revolutionise the way we work – we read about it in headlines every day. From optimising software delivery to accelerating data analysis, generative AI can help take mundane tasks off our plates so staff can focus on more strategic, creative work. 

However, generative AI in its current form is no expert. It cannot yet reason deeply, nor does it have a real comprehension of the world. It’s more like a very smart intern: full of knowledge but short on real-world experience, and therefore prone to misjudging situations or drawing “obviously” wrong conclusions. 

Generative AI is a capable learner and eagerly absorbs knowledge from its mentors, offering organisations a valuable resource to help ease employee workloads. But just as you would never send out an intern’s work without checking it first, AI needs human supervision, training, guidance, and constant input to stamp out inaccuracies and add perspective that comes from real experience. 

Treating AI like an intern can therefore unlock the full potential of the technology, while ensuring that work remains of a high standard and that important data is protected. So how can firms shift their mindset to boost efficiency and innovation without spending undue time on this new trainee? After all, embracing AI can backfire if not done carefully.

Generative AI – the research assistant of the future?

Just like an intern can support experienced staff with time-consuming and repetitive tasks, generative AI can quickly create detailed reports and summaries, offer cross-references, and streamline complex processes when given the right guidance. AI’s ability to instantly sift through vast amounts of data is a game-changing resource across industries, even though the results may not be perfect the first time around. 

Take the financial services industry, for example. Financial institutions sit on a wealth of data, which, when leveraged properly, can provide a treasure trove of information to help create tailored products and services that suit the unique needs of their customers. Yet the sheer volume of data that must be analysed to make product delivery as customer-centric as possible leaves many financial institutions with data blind spots, struggling to gain a clear understanding of their customers. 

AI can help tackle this large volume of data, for example by quickly spotting patterns in spending habits that give institutions a deeper understanding of their customers’ behaviour and financial goals. However, product managers must remain in the loop to oversee the AI’s work and make sure that no inaccuracies creep in.
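As a loose illustration of the kind of pattern-spotting described above, the sketch below groups a handful of invented monthly spending profiles into segments that a product manager would then review. The spending categories, figures, and choice of three segments are all hypothetical assumptions, not a description of any real institution’s data or tooling.

```python
# Minimal, hypothetical sketch: grouping customers by monthly spending mix
# so a product team can review the resulting segments. Data is invented.
import numpy as np
from sklearn.cluster import KMeans

# Each row is one customer's monthly spend: groceries, travel, subscriptions
spend = np.array([
    [620.0,  40.0,  15.0],
    [580.0,  55.0,  12.0],
    [210.0, 480.0,  30.0],
    [190.0, 510.0,  25.0],
    [300.0,  60.0,  95.0],
    [280.0,  70.0, 110.0],
])

# Three segments is an arbitrary choice; picking the "right" number,
# and deciding what each segment means, still needs human judgement.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(spend)

for customer, segment in zip(spend, model.labels_):
    print(f"spend={customer} -> segment {segment}")
# A product manager would inspect these segments before acting on them.
```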

Speeding up big data analysis with AI can also accelerate KYC checks. Firms can use AI tools to model how other teams have integrated a KYC partner, as well as to help monitor transactions for suspicious activity. For instance, Solaris created an internal taskforce to explore how Microsoft Copilot – a tool that uses the power of large language models and the data in Microsoft 365 apps to automate tasks such as drafting emails or PowerPoint slides – could boost the productivity of its product development teams, while preserving the central role of human review.   
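To make the human-in-the-loop point concrete, here is a deliberately simple, hypothetical sketch of transaction monitoring: a basic statistical check flags outliers, but a flag only ever feeds a review queue for a compliance analyst rather than triggering an automatic decision. The threshold, function names, and data are assumptions for illustration only, not a description of Solaris’s systems or of Copilot.

```python
# Hypothetical sketch: flag unusually large transactions for *human* review.
# The threshold logic is illustrative, not a real monitoring rule set.
from statistics import mean, stdev

def flag_for_review(history: list[float], amount: float, z_cutoff: float = 3.0) -> bool:
    """Return True if `amount` sits far outside the customer's usual range."""
    if len(history) < 2:
        return True  # not enough history: err on the side of a human look
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_cutoff

history = [42.0, 55.0, 38.0, 61.0, 47.0]
for amount in (52.0, 950.0):
    if flag_for_review(history, amount):
        print(f"{amount}: queued for a compliance analyst")  # a human decides
    else:
        print(f"{amount}: within the customer's normal pattern")
```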

Injecting creativity into the workplace

Besides the practical benefits that AI can bring to day-to-day work, encouraging teams to think of AI as an intern empowers them to tap into the technology’s full potential. Eliminating mundane tasks frees up time for more creative thinking, experimentation, and continuous improvement, while retaining an emphasis on fact-checking and human responsibility. 

The “no-code” aspect of many AI tools also lowers the barrier to entry, encouraging a wider pool of individuals to engage with the technology. Staff can bring their ideas to life without writing a line of code, and more people can get involved in innovative projects, which helps to foster a culture of creativity and sharpen strategic thinking in the workplace.

AI must be kept on a leash

But as the headlines remind us every day, implementing AI is not all sunshine and rainbows – it must be approached with caution. Firms must employ the appropriate guardrails and fact-checking mechanisms to ensure that AI is unbiased and compliant. Keeping humans in the loop is critical.

Think of an intern working in a bank’s product department, for instance. If a software engineer on the team asks them to write a few lines of code, it is that engineer’s responsibility to teach, supervise, and check for mistakes; the buck stops with them. They should also ask a second and third person to review the code so that no errors slip through the cracks. If mistakes are found after this review process, three people are on the hook, and the intern – or the AI – is not one of them. 

It is also critical that firms understand and check for any intellectual property (IP) infringement, as AI tools are trained on and collect data from many sources, which can expose firms to GDPR and copyright risks. Selecting an AI vendor whose tools are designed to be secure and that treats data protection as a competitive advantage will enable firms to build a trusted, reliable AI system. 

AI as a learning partner

AI is advancing in leaps and bounds in the working world. Its speed, adaptability, and capacity to learn mean that staff around the world are already delegating manual tasks to AI “interns”, lifting the administrative burden from employees’ shoulders and allowing them to spend more time innovating.  

But just as interns can gain valuable experience through exposure and guidance, AI needs the right supervision to maximise its potential. So, before firms offer generative AI a full-time job in their company, they must ensure that humans and AI are working collaboratively to drive efficiencies in the workplace and support staff with their day-to-day work. AI is a tool and should remain so – it must not be put in the driver’s seat.