The World Economic Forum recently confirmed its intention to develop global rules for AI and to create an AI Council that will aim to find common ground between nations on policy for AI and other emerging technologies.
The trouble is that regulations designed to breathe life into the AI dream could in fact do the opposite, if not approached with care and due diligence. Regulatory compliance is important across the spectrum of business, from retail to banking, but it is also incredibly complicated, especially with new technologies being implemented into business models on an almost daily basis.
Because such rules constantly require updates and revisions, regulatory bodies are finding them increasingly difficult to enforce. The same scenario applies to AI. With the hype around AI reaching full throttle, measures are being taken to make its use fair and ethical.
However, the technology is constantly evolving, so these regulations are quickly becoming obsolete. This means that AI will never truly be regulated unless steps are taken to make sure that any iteration of it is covered.
The time to talk about AI regulation is now. However, regulating AI as a technology would be detrimental to societal progress, and it would prove difficult for any government to stop its implementation. Regulations around its applications, on the other hand, could prove vital in the future.
Regulate the applications, not the technology
AI will continue to make drastic improvements and advancements across many sectors, which will have a profound impact on society.
For example, the use of AI within healthcare will see medical research and trials conclude more quickly. In addition, transportation is set to change with self-driving vehicles and smart roads, which will contribute to creating smart cities. AI can also help to predict natural disasters, reducing the number of people affected – currently around 160 million worldwide each year.
Real-time data can also aid farming, improving agricultural productivity to help feed a growing population, and we are set to benefit economically as the use of AI within businesses evolves.
On the opposite end of the spectrum there is AI bias, accelerated hacking, and AI terrorism. This is where big challenges await both government institutions and legal organizations as they tackle the larger issues that can arise from misuse of the technology.
With AI predicted to be beneficial across a wide spectrum of different applications, it would make more sense for the applications themselves to be regulated. The application requirements for AI in healthcare are different from banking requirements, for example – the ethical, legal and economic issues around issuing medication to patients are far different from the issues involved with financial transactions.
Regulating AI as a whole, on the other hand, would limit its use in some industries more than others, leaving businesses in those industries with little reason to implement automation or AI in their business models at all.
Industry expert advice will be needed to regulate properly
In my opinion, in order to regulate properly, governments and policymakers will need to work closely with professional bodies from each industry that can advise decision makers on policy, regulation, and best practice: what the technology is needed for, how they'll make it work, how it may impact their workforce, whether their workforce will need retraining, and what support they need from the government to ensure a smooth transition into an AI-based business.
Companies should be optimistic that the decision makers at the policy level will listen to their concerns and regulate the applications effectively, rather than narrowing usage by imposing a “blanket” regulation on the whole technology.
Protecting people from malicious intent
Another important factor that governments and businesses will need to be aware of is devising methods to prevent the rise of AI used with malicious intent, such as hacking or fraudulent sales. Most cybersecurity experts predict that AI-powered cyberattacks will be one of the biggest challenges of the 2020s, which means that regulations and preventative measures should be implemented as in any other industry: designed specifically for the application.
Stringent qualification processes will also be needed for certain industries. For example, Broadway show producers have been driving ticket sales through an automated chatbot, with the show Wicked boasting ROI increases of up to 700 percent. This has also allowed producers to sell tickets for 20 percent more than the average weekly price.
Regulations will need to address the fact that AI and bots have the potential to take advantage of consumers’ wallets, which means that policymakers will need to work closely with firms that are gradually beginning to rely on chatbots to make sure that consumer rights are not being breached. This must be done while implementing strict qualification processes to make sure that chatbots and AI are thoroughly reviewed before being implemented into a business model.
2020 and beyond
As we plow ahead into the 2020s, the only way we can realistically see AI and automation take the world of business by storm is if they are smartly regulated. That begins with incentivizing further advancement and innovation in the technology, which means regulating applications rather than the technology itself.
Published November 10, 2019 — 17:00 UTC