
Standards for data sharing should guide AI government regulation

The White House has taken a deregulatory approach to AI and aims to inspire innovation. An expert weighs in on the role of government in AI and where the industry stands.

With a focus on deregulation and strategic investment, the White House hopes to promote growth in the AI industry by keeping restrictions to a minimum.

The five pillars for advancing AI, according to the White House, are: promote sustained AI R&D investment; unleash federal AI resources; remove barriers to AI innovation; empower the American worker with AI-focused education and training opportunities; and promote an international environment that is supportive of American AI innovation and its responsible use.

Critics worry this approach will have the opposite effect: without knowing what will and will not be legal, companies may be hesitant to develop new technologies and take a careful, slow approach. Others believe it will create a situation in which the advancement of technology outpaces the regulations in place to protect people.

Dr. Hossein Rahnama, MIT Machine Intelligence professor and founder and CEO of AI fintech Flybits, spoke about the presence of AI in the private sector and the importance of government in creating a regulatory landscape that can foster its growth.

How prevalent is AI, and how can the government ensure global AI supremacy?

Hossein Rahnama: The foundation of machine learning -- statistical science and modeling -- has a strong history in the economy, public markets, hedge funds, defense and military. As science is becoming more interdisciplinary and post-secondary institutions are breaking their silos, we're seeing more of AI's impact in non-STEM verticals such as health, media and even education.

At the moment, there are very few AI products in repeatable or scalable use in the industry. There is a lot of marketing activity, and many data services and consulting practices are trying to productize their offerings. Even the solutions and platforms provided by big tech are so complex and expensive that they are less-than-ideal candidates for scalable use. AI will reach mass adoption in industries when the privacy-adherent data sharing economy is better understood and standardized, and when tools become available that allow non-AI experts to develop intelligent systems powered by AI.


The adoption of AI is strongly correlated with increases in productivity. In the knowledge economy, data access and productivity gains are key. Just as the steam engine drove the previous industrial revolution, AI capabilities will drive the next one. However, a steam engine on its own is not that useful in the absence of rail networks and supply chain systems. As a result, the countries that get the AI ecosystem, including data ecosystems, right will gain supremacy.

How can the use of AI in the private sector be encouraged without federal oversight?

Rahnama: We can achieve this by placing as strong an emphasis on data sharing and the data economy as we do on AI -- from the education system to the corporate and nonprofit worlds. The mindset of [education and public AI] is 'I need the data before I write my thesis.' However, this mindset is not applicable in the post-graduate world, and therefore the market must learn how to access data while preserving its integrity, privacy and security.

How can the government energize the AI industry, and how crucial is its involvement?

Rahnama: AI without data access is worthless. There is a market dynamic in which large companies want to solve pain points with AI but, due to a lack of expertise, want to rely on tech firms to do so. At the same time, they are rightfully very risk-averse when it comes to sharing data outside the organization. If government plays a role in setting up guidelines, standards and regulations for the data economy and data sharing -- similar to what Europe has done with GDPR and PSD2 and what California has done with the CCPA -- and also introduces incentive models for large companies to share their aggregated, anonymized data, the AI economy can grow faster and more predictably.

Imagine a shipping transaction between two countries without a trade deal, the formulation of a medication without any FDA regulation, or the production of a new material at scale without standard guidelines. Governments, and even the UN, should play a role in clearly defining frameworks for data sharing, governance and AI ethics. However, the government should not interfere in the micro activities of implementation and protocols as long as those frameworks are met. It's the government that should set the frameworks and the market entrants who should adhere to them, not the other way around. A lack of AI regulation will have catastrophic effects in the long run.

How do you see government regulation of AI?

Rahnama: As a general framework for the education, ethics, governance and scalability of not just AI, but also data. The government should educate the public that there are several kinds of AI capabilities, some more prone to misuse than others. For example, facial recognition based on neural networks is much more susceptible to misuse than a Bayesian filter used to assess the risk of a credit application. The public should understand these foundations and decide what data they'd like to share with the selected AI model and inference capability.
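
To make that contrast concrete, here is a minimal, hypothetical sketch -- not part of the interview -- of the kind of Bayesian filter Rahnama describes: a naive Bayes classifier (scikit-learn's GaussianNB) trained on made-up applicant features to produce an interpretable probability of default risk. The feature names and data are illustrative assumptions only.

```python
# A minimal sketch of a "Bayesian filter" for credit-risk scoring.
# Hypothetical features and labels; not a production model.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical training data:
# [income_in_thousands, debt_to_income_ratio, years_of_credit_history]
X_train = np.array([
    [85, 0.15, 12],
    [42, 0.55, 3],
    [60, 0.30, 8],
    [30, 0.70, 1],
    [95, 0.10, 20],
    [50, 0.45, 5],
])
# Labels: 0 = low default risk, 1 = high default risk
y_train = np.array([0, 1, 0, 1, 0, 1])

model = GaussianNB()
model.fit(X_train, y_train)

# Score a new application; the output is an interpretable probability
# over a handful of declared features, not an opaque biometric match.
applicant = np.array([[55, 0.40, 6]])
risk = model.predict_proba(applicant)[0, 1]
print(f"Estimated probability of high risk: {risk:.2f}")
```

The point of the sketch is that the inputs and the inference step are transparent enough for an applicant or a regulator to inspect, which is part of why such models are less prone to the kind of misuse associated with facial recognition.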

Before regulating AI, governments should understand its premises. Unlike many other industries, the AI domain is about uncertainty, non-trivial patterns and, in many cases, surprises. Regulations governing [AI] cannot be based solely on deterministic, rule-based semantics; they must also address the dynamic elements of AI.

Editor's note: This Q&A has been edited for clarity and conciseness.
