KNect365 Finance is part of the Informa Connect Division of Informa PLC


The risks of outsourcing: working with AI startups

As technology giants compete with every other sector for the talent and experience that will make AI a success, banks looking for their industry 4.0 update need to turn to startups. Working with a third-party AI startup carries significant risk, however, so here are the cultural barriers, the implied risks, and the questions you should ask before partnering with an AI startup.

AI seems to be everywhere. It is near impossible to read the media without hearing about the transformative impact of AI on businesses. Gartner research predicts that enterprises will derive up to $3.9 trillion in value from AI by 2022. From HR and finance to operations, sales, and marketing, AI will help grow revenues, drive efficiencies and create deeper customer relationships. Chatbots will make the long wait to speak to a customer service representative something from a bygone era. Many repetitive and boring corporate jobs, such as data entry, quality assurance or candidate screening, will be automated.

But the AI industry is nascent and evolving very quickly, with a shortage of expertise and experience. This means that many enterprises will have to partner with, and outsource their AI solutions to, the thousands of new AI startups if they want a slice of that $3.9 trillion. But working with these startups is full of potential land mines, including technical, practical, legal, reputational and IP ownership risks. Many of these risks stem from cultural gaps, so it’s imperative to understand them before enterprises and startups work together.

Move fast and break things

Startups have a culture that is often anathema to corporate life. Silicon Valley popularised the notion of “move fast and break things” — mistakes will happen, live with it. We also hear how AI-powered companies such as Uber are launching innovative consumer services by deliberately pushing on the boundaries of existing regulations. Entrepreneurs have been described as having unreasonable disregard for what can be reasonably done. They don’t like to be told no. They push to scale their businesses, really quickly. They hate bureaucracy. They want solutions now, not in days or weeks. They are creative in their marketing to close deals. This is the DNA of the entrepreneur and their startup.

As a result, young entrepreneurs who have little experience of corporate life find selling to and working with enterprises difficult. And not surprisingly, enterprises can find working with startups challenging.

These cultural differences often surface when there’s a gap in expectations. The startup might have stars in its eyes as it savours winning a brand name client, such as your company. Your brand will help validate their young endeavour. But does the startup understand how long decision making can take when multiple enterprise stakeholders, especially legal, are involved? Do they understand how demanding enterprises can be before agreeing to sign off an AI prototype or deliverable? Do they know that it can take months to get data extracted from backend systems? Do they understand that doing a pilot AI project is no guarantee of rolling out the solution across an organisation? And do they understand that invoices will be paid really slowly?

Cultural differences frequently lead to misunderstandings, tensions, and biases that can end in project failure or, even worse, the demise of the startup as it runs out of cash trying to satisfy your needs. It is critical that enterprises are self-aware and understand the cultural differences before embarking on a relationship.

Working with AI startups is full of potential land mines

The challenges and risks of working with an AI startup are not only cultural. They include:

  1. Technology and algorithmic risks — Much of today’s AI technology is relatively immature and there are risks that it might not work in the real world. We have seen customer service chatbot projects canned because the chatbot answered with gibberish when used with real customers. And just because an algorithm predicted consumer loan defaults with 90% accuracy for one client, it doesn’t mean it will achieve the same accuracy when trained on your data.
  2. Integration and implementation risk — AI startups are notoriously optimistic and often underestimate the time and cost to integrate and implement an AI solution. Proofs of concept can often be hacked together in a matter of months. But rolling them out across an organisation can be fraught with challenges, for example when integrating with an enterprise’s existing legacy systems, creating clean and labelled datasets, and working with existing processes. Some surveys suggest that implementing AI across an organisation is taking twice as long as anticipated by the startups.
  3. Future proof risk — AI is going through its gold rush moment with thousands of AI startups recently founded. However, if we fast forward a few years, history tells us that many of these young companies will fall by the wayside. And even if an AI startup flourishes, there is still no guarantee that they will have the technological capabilities you will want tomorrow.
  4. Legal and reputational risk — AI startups could be using technology, tools and data that put your company at legal and reputational risk. Data privacy laws, including the recently introduced European GDPR, already require suppliers that process personal data — the fuel for many AI algorithms — to follow appropriate technical and organisational measures to ensure the information is secure. Under GDPR, there are also requirements that any automated decisions with legal effect — such as an AI system that determines who qualifies for a loan or a job — are “transparent” and “explainable”. Similarly, there are brand reputational risks if a company’s use of AI is seen as biased against certain demographics. We have seen much criticism of facial recognition technology that is better at classifying the gender of white male faces than those of women and people from other ethnic groups.
  5. Intellectual property risk — Many startups will argue that the algorithms that they deliver are that much smarter because they are trained on datasets from a wide variety of clients. But if your data is a strategic asset, such as an insurance company with claims history of millions of customers, you might not want it to be used for the benefit of your competitors. There is a trade-off to be made. Similarly, you might not want your AI solution’s software code to be shared with other clients of the startup.

The key factors for successful collaboration

For most enterprises, working with an AI startup is likely a necessity at this early stage of the industry’s evolution. But navigating the wealth of AI startups to identify players that will be around tomorrow and share the same destination is difficult. During the evaluation of prospective AI vendors, make sure you ask the following questions:

  1. Cultural fit — does the startup have demonstrable experience working with complex enterprises? Do they have realistic expectations of the relationship?
  2. Technology and algorithmic efficacy benchmarks — can the startup explain and demonstrate the effectiveness and limitations of its technology and algorithms? Can they explain how effective the solution will be with your data? How long will it take to train the AI on your data? And how long will it take to integrate their solution with your systems, data, and processes?
  3. Product roadmap — is the startup product roadmap aligned with your future needs? Is the product compatible with your technology stack?
  4. Financial health — does the startup demonstrate customer and revenue growth along with strong financial backing from leading venture capitalists?
  5. Responsible AI — does the startup have responsible AI principles that they document, explain and follow? Can they help you understand the legal and reputational risks of their AI solution? Do they follow principles of transparency and explainability of algorithms? Do they know the provenance of the data used in their system and also the risks of data sample bias?
  6. IP ownership — does the startup retain ownership of the IP, or does it transfer to the client?

AI procurement is relatively new, but we are now starting to see frameworks emerge to help guide AI vendor selection and management. For example, the World Economic Forum is partnering with the British Government to develop such a framework.

All in all, the most important thing to understand is that most challenges of AI projects come down to human factors. Relationships often end up as they start out, so make sure you are on the same page with your startup early on, and commit to communicating your expectations and needs clearly and frequently. The reality is that we all need to find a way to make these relationships work, as enterprises and AI startups need each other.

Find out more about FinTech startups at RiskMinds Asia.


About Simon Greenman

Simon Greenman is a partner at Best Practice AI — an AI Management Consultancy that helps companies create competitive advantage with AI. Simon is on the World Economic Forum’s Global AI Council; an AI Expert in Residence at Seedcamp; and Co-Chairs the Harvard Business School Alumni Angels of London. He has twenty years of leadership of digital transformations across Europe and the US. Please get in touch by emailing him directly or find him on LinkedIn or Twitter or follow him on Medium.

This article was originally published in our latest eMagazine: A new dawn for risk management.
