Unlocking the Potential of Generative AI in IoT


For the past year, my team has been exploring the value that generative AI can add to Internet of Things (IoT) platforms. The reason is simple: as IoT continues to evolve, addressing challenges related to mass connectivity and device management, the need for a forward-looking perspective on the future of IoT platforms becomes more apparent. Use cases like predictive maintenance indirectly highlight this need for advanced functionality. While device management remains fundamental, enabling such use cases requires practical capabilities such as machine learning and AI. With an increasing number of connected devices generating data daily, AI presents significant opportunities to deliver value. However, the adoption of machine learning and AI in this domain has been slow. Now, a new kid on the block has emerged—generative AI. Will generative AI be the breakthrough? Can it facilitate the adoption of ML and AI?

Generative AI refers to models that not only analyze existing content but also generate new content. While text and image generators are the most common examples, generative models can also produce code, audio, video, and more. As IoT expands, generative AI could automate and enhance numerous processes. My team set out to determine whether this technology holds the key to IoT's future growth.

We researched generative AI by reading publications, undergoing training, and attending lectures. We then brainstormed use cases, categorizing them based on their value and implementation difficulty.

The use cases we selected were:

  • Chatbot for documentation and community articles
  • Low-code assistants
  • Automated data analysis
  • Contextual insights
  • Automated integration

We built initial prototypes using various popular AI platforms. However, we quickly realized that while these environments were improving rapidly, they still lacked robust functionality.

With the assistance of one vendor, we developed a chatbot prototype within weeks and demonstrated it at our International User Group in Budapest. The ease of development and early promise marked it as a prime opportunity, leading us to prioritize having a production version by the end of 2023. However, we have been unable to improve answer quality sufficiently for customers to push it to production. Specifically, we struggle with the chatbot's tendency to hallucinate: its eagerness to provide a response even when it lacks the knowledge to do so. Despite training it exclusively on our documentation, it would still answer questions it should not. It could recognize questions clearly outside its scope; when asked to order pizza, it responded appropriately with, "Sorry I can’t help here, I am only trained on documentation." But when asked for an example involving a fictional integration protocol, it often provided inaccurate information rather than declining. To be truly trustworthy for specialists, enabling them to make informed decisions, it should simply admit its lack of knowledge. That seems to be as challenging for generative AI as it is for humans, and it remains an unsolved issue hindering adoption.
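
One way to push a documentation bot toward admitting ignorance is to gate answers on retrieval confidence: if no documentation passage is sufficiently similar to the question, refuse rather than letting the model improvise. The following is a minimal sketch of that idea, not our production setup; the TF-IDF retrieval, the threshold, and the sample sections are illustrative placeholders.

    # Minimal sketch: refuse when no documentation passage is relevant enough,
    # instead of letting the model improvise an answer.
    # TF-IDF stands in for whatever embedding/retrieval a real deployment would use.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    DOC_SECTIONS = [
        "To customize branding, upload your logo and color palette in the tenant settings.",
        "A data lake connector streams device telemetry into external storage for analysis.",
    ]

    REFUSAL = "Sorry I can't help here, I am only trained on documentation."

    def answer(question: str, threshold: float = 0.1) -> str:
        # Fit on the documentation plus the question so both share one vocabulary.
        vectorizer = TfidfVectorizer().fit(DOC_SECTIONS + [question])
        scores = cosine_similarity(
            vectorizer.transform([question]), vectorizer.transform(DOC_SECTIONS)
        )[0]
        best = scores.argmax()
        if scores[best] < threshold:
            return REFUSAL  # nothing relevant was retrieved, so admit it
        # A real system would hand the matched section to the LLM as grounding context;
        # returning it here just shows which passage would anchor the answer.
        return DOC_SECTIONS[best]

    print(answer("Can you order me a pizza?"))     # -> refusal
    print(answer("How do I customize branding?"))  # -> the branding section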

Additionally, we found training models on our datasets to be too expensive, leading us to store documentation separately. This created two problems. First, phrasing became crucial. Asking about a "datalake" versus "data lake" yielded inconsistent results since the bot viewed "data" as the key term. Building extensive query mappings could help, but given our documentation scale, it would be extremely laborious.
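
Rather than hand-building exhaustive query mappings, known phrasing variants can be normalized before a question ever reaches the bot. The sketch below is purely illustrative; the variant table is hypothetical and, at the scale of our documentation, would have to be generated from the docs rather than maintained by hand.

    # Sketch: rewrite known phrasing variants before the query hits retrieval.
    # The variant table is illustrative, not an actual mapping from our documentation.
    import re

    TERM_VARIANTS = {
        r"\bdatalake\b": "data lake",
        r"\balter branding\b": "customize branding",
        r"\bre-?branding\b": "customize branding",
    }

    def normalize_query(query: str) -> str:
        normalized = query.lower()
        for pattern, canonical in TERM_VARIANTS.items():
            normalized = re.sub(pattern, canonical, normalized)
        return normalized

    print(normalize_query("How do I set up a datalake?"))
    # -> "how do i set up a data lake?"
    print(normalize_query("How can I alter branding for my tenant?"))
    # -> "how can i customize branding for my tenant?"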

Second, there was the issue of topic scattering. A concept such as white labeling spans multiple sections, and the bot tended to favor and repeat content from one section while ignoring relevant information in the others. For example, when testing the platform's re-branding capabilities, the test team asked how to alter the branding, and the bot kept pointing to an irrelevant section on branding. We realized this happened because most sections used the phrase "customize branding" rather than "alter branding," the latter being the wording in the favored section. Only after standardizing the terminology on "customize branding" did we achieve satisfactory responses.
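
Finding such inconsistencies is largely a matter of auditing where synonymous phrases appear. As a rough illustration (the section contents and the synonym list are made up), a short script can report which sections diverge from the preferred phrase so writers know what to standardize.

    # Sketch: report which documentation sections use which terminology variant,
    # so wording can be standardized before it confuses retrieval.
    from collections import Counter

    SYNONYMS = ["customize branding", "alter branding", "re-branding", "white labeling"]

    SECTIONS = {
        "Getting started": "You can alter branding by uploading a logo.",
        "Administration": "Customize branding and white labeling per tenant.",
        "API reference": "POST /branding applies the re-branding settings.",
    }

    def terminology_report(sections, synonyms):
        report = {}
        for title, text in sections.items():
            lowered = text.lower()
            report[title] = Counter({s: lowered.count(s) for s in synonyms if s in lowered})
        return report

    for title, counts in terminology_report(SECTIONS, SYNONYMS).items():
        print(title, dict(counts))
    # Getting started {'alter branding': 1}
    # Administration {'customize branding': 1, 'white labeling': 1}
    # API reference {'re-branding': 1}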

In conclusion, generative AI solutions can easily impress, but getting them to bend to your will is hard. The 80/20 rule applies: a quick prototype gets you most of the way, while the slower refinement needed to satisfy specialists accounts for most of the effort. I believe generative AI will eventually become embedded within our tools, but reliable enterprise adoption remains challenging, and mastering subtleties such as handling unknowns and scattered topics is progressing more slowly than hoped. Still, the creativity and potential cannot be ignored. It may not be imminent, but given the value it can bring, the role of generative AI in IoT's future seems assured.


