Without interoperability, AI development opportunities will be limited to Big Tech, as they have access to the most data.
Big Tech companies’ lock-in models are hindering the development of new technologies such as artificial intelligence (AI). Smaller companies cannot compete with Big Tech’s data stores, and developers are tied to specific services and providers, such as AWS, even when the best options across the AI architecture may come from different companies.
GlobalData projects that the global AI platforms market will grow by 20.8% by 2024. In the Americas, IBM, Google, Microsoft, and AWS hold a collective data and analytics market share of 23.1%.
A few initiatives have been piloted, with some Big Tech companies offering interoperability across AI frameworks, but this is not yet commonplace.
Data interoperability is essential for AI development. It would give smaller providers access to Big Tech’s data resources, creating a level playing field for development, which matters all the more in the absence of explicit AI regulation.
Interoperability between platforms would create a common understanding of AI and allow flexible models across vendors, drawing on the best tools at each layer of the AI architecture. This would drive innovation and competition towards more advanced AI.
Limitations of lock-in models
TensorFlow (Google) is one of the most popular AI frameworks. It offers high computational power but lacks many pre-trained AI models. Similarly, AWS (Amazon) provides strong security and extensive tools for data analysis but lacks flexibility with specific machine learning algorithms. Interoperability would make it easier to combine the best features of each framework, or to switch providers altogether.
Some of the best AI developments have come from smaller companies, such as Nauto, a leader in fleet safety management that leverages AI to prevent traffic collisions. Lock-in models make it difficult to switch AI frameworks, or to use different providers across the AI architecture, which could stifle future development.
Current initiatives must go further
Interoperability creates mutually legible systems that can work together. Facebook and Microsoft launched the Open Neural Network Exchange (ONNX) in 2017, allowing machine learning models to be transferred between AI frameworks. Developers do not have to commit to a specific AI framework at the start of their research and can move between tools and choose different combinations. Caffe2 and PyTorch (Facebook) and Cognitive Toolkit (Microsoft) are available under ONNX; Amazon and Google are absent from the initiative.
Amazon also piloted the Voice Interoperability Initiative, which ensures compatibility between systems so that voice services work “seamlessly” with one another. Facebook, Garmin, and Xiaomi are members, while Google again remains absent.
AI architectures require interoperability for innovation
Mohammed Farooq, Chief Technology Officer and General Manager of Products at enterprise AI company Hypergiant, argues that an open AI services integration platform is the best option for developers. Hypergiant is launching a beta version of the first steps on its roadmap toward such a platform in Q1 2021. This would mean different solutions could be used in tandem, from Azure to MongoDB and AWS.
This provides a compatibility solution for AI developers. Farooq explains that this is particularly important for developing distributed AI architectures with multi-cloud infrastructure and edge ecosystems, requiring different service providers.
While private companies can offer a practical solution for distributed architectures, interoperability between the Big Tech companies is essential for flexibility and transparency in the market in the absence of specific regulation.