The topic of Artificial Intelligence (AI) couldn’t be hotter, with the advent of tools such as ChatGPT and Midjourney raising very real questions about what AI means for the future of humanity. 35% of businesses globally say they currently use AI, and a further 42% plan to adopt it at some point in the future. With this in mind, the tech experts at our friends SOAX have looked at five ways AI could be putting your business under threat.
Accuracy and accountability
One of AI’s biggest problems, especially with chatbot platforms such as ChatGPT, is the sourcing, accuracy and accountability of the information it provides. Where these models get their information is a big question: the platforms offer little transparency about how they source it, which makes the answers they give extremely difficult, and sometimes completely impossible, to verify.
Does this mean the AI has made up its own information? Not necessarily, but it’s a real possibility. Fabricated output of this kind is referred to as a ‘hallucination’, and hallucinations are not uncommon. For example, ChatGPT once supplied lawyers with completely fictional court cases when it was used for legal research, a case that shows AI hallucinations can have serious real-world consequences and that unverified AI output is an unreliable source of information for businesses.
Skills gap
As more businesses adopt AI, they should ask whether they have the skills and capabilities to do so sensibly and efficiently. Given the risk of mistakes, misinformation and hallucinations, which, as demonstrated above, can cause serious harm, it’s unlikely that most organizations have the expertise to use the technology to its full and safe potential.
AI also brings data and technology risks of its own. Data is the foundation of any AI system, and how it is collected, stored and used matters enormously. Organizations that don’t understand this well expose themselves to reputational damage, data-quality problems and security breaches.
Copyright and legal risks
When a real person does something wrong and breaks the law, they are held accountable for their actions through the rule of law wherever they are. But what happens if an AI breaks the law? The question of who is accountable for what an AI outputs creates a host of legal issues.
As mentioned previously, identifying the source of an AI’s data, or the origin of its errors, is extremely difficult, and that difficulty carries legal consequences. If a model is trained on data taken from intellectual property, such as software, art and music, who owns the intellectual property in its output?
When Google is used to search for something, it can typically return a link to the source or originator of the IP; this is not the case with AI. Beyond that, there is a plethora of other issues, including data privacy, data bias, discrimination, security and ethics. Deepfakes have also been a huge concern lately: who owns a deepfake of yourself, you or its creator? This is a gray area that regulation and concrete law have not yet caught up with, so businesses must factor it in when using AI.
Picture a large company where employees and departments are each implementing AI tools on their own. The legal and liability exposure this creates has prompted numerous companies, including industry giants like Apple and JPMorgan Chase, to prohibit the use of AI tools such as ChatGPT.
Costs
Every piece of technology is ultimately judged by its financial return on investment (ROI), and some technology launches with promise and potential but ultimately fails because of the costs it incurs. Take Google Glass or the Segway: both were promising at the time of invention, but neither lived up to the expected market gain.
Investment in AI is becoming huge. Accenture, for example, is investing $3 billion in AI, and the big cloud providers are spending tens of billions on new AI infrastructure. Many companies will therefore need to spend heavily on training their staff and adopting the newest AI technologies, and without an ROI that isn’t a sustainable or effective move for a business. The investment could pay off in the long run, but it’s certainly not guaranteed: a study by Boston Consulting Group found that just 11% of businesses see a significant return on investment in AI.
Data privacy
No matter how it’s used, anybody’s personal data is subject to standard data protection laws. This includes any data collected for the purposes of training an AI model, which can easily grow to be extremely extensive.
The general advice to organizations is to carry out a data protection impact assessment; to gain the consent of data subjects; to be prepared to explain their use of personal data; and to collect no more than is necessary. Importantly, procuring an AI system from a third party does not absolve a business of its responsibility to comply with data protection laws.
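To make the “collect no more than is necessary” principle concrete, here is a minimal sketch, assuming a Python pipeline and two illustrative regexes (a hypothetical example, not any vendor’s tooling), of redacting obvious personal identifiers from free-text records before they are used to train a model:

```python
import re

# A minimal, hypothetical sketch of data minimization: redact obvious
# personal identifiers from free-text records before they enter an AI
# training corpus. Real pipelines need far more robust PII detection
# (names, addresses, IDs, ...) and a lawful basis for what remains.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def minimize(record: str) -> str:
    """Return the record with email addresses and phone numbers redacted."""
    record = EMAIL_RE.sub("[EMAIL]", record)
    record = PHONE_RE.sub("[PHONE]", record)
    return record

if __name__ == "__main__":
    sample = "Contact Jane at jane.doe@example.com or +1 (555) 010-9999."
    print(minimize(sample))  # Contact Jane at [EMAIL] or [PHONE].
```

Redaction like this is only a first step: a data protection impact assessment should still document what personal data survives the pipeline and why it is needed.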
Last year, the video platform Vimeo agreed to pay $2.25m to some of its users to settle a lawsuit over collecting and storing their facial biometrics without their knowledge. The company had been using the data to train an AI to classify images for storage, and insisted that “determining whether an area represents a human face or a volleyball does not equate to ‘facial recognition’”.