Watson Platform Expands with New APIs, Technologies and Tools

IBM expanded its array of APIs, technologies, and tools for developers who are creating products, services, and applications embedded with Watson. Over the past two years, the Watson platform has evolved from one API and a limited set of application-specific deep Q&A capabilities to more than 25 APIs powered by over 50 technologies.

Watson represents a new model of computing: cognitive. It understands all types of data and, rather than being explicitly programmed, it learns. Watson has been put to work in diverse industries with clients and through an open developer platform. IBM continues to expand Watson's abilities, enabling partners to bring their own creativity and aspirations as they build their businesses with cognitive computing.

The company announced a new Watson location in San Francisco: a Watson Hub in the South of Market (SoMa) district, expanding IBM's presence in Silicon Valley and the greater Bay Area.

IBM also previewed new platform innovations and research projects that will extend its cognitive portfolio. New capabilities, offered through the Watson Developer Cloud, include advanced language, speech, and vision services, as well as developer tools.

“Since introducing the Watson development platform, thousands of people have used these technologies in new and inventive ways, and many have done so without extensive experience as a coder or data scientist,” said Mike Rhodin, senior vice president of IBM Watson. “We believe that by opening Watson to all, and continuously expanding what it can do, we are democratizing the power of data, and with it innovation.”

New and expanded capabilities for developers include advances in services that enable cognitive applications to understand the ambiguities of natural language in text. For example, IBM Watson Natural Language Classifier enables developers to build products and applications that understand intent and meaning, finding answers for users even when questions are asked in varying ways, while IBM Watson Dialog makes app interactions more natural by tailoring conversations to the individual style a person uses to ask a question.
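As a rough sketch of how an application might use an intent classifier like this, the snippet below assembles a classify request and picks the highest-confidence intent from a response. The base URL, classifier ID, and response shape are illustrative assumptions for this sketch, not the documented Watson API surface.

```python
import json

# Hypothetical sketch of an intent-classification call. The base URL,
# classifier ID, and response fields are illustrative assumptions,
# not the documented Watson API.
NLC_URL = "https://example.com/natural-language-classifier/api/v1"

def build_classify_request(classifier_id, text):
    """Assemble the URL and JSON body for a classify call."""
    url = "%s/classifiers/%s/classify" % (NLC_URL, classifier_id)
    body = json.dumps({"text": text})
    return url, body

def best_intent(response_json):
    """Return the highest-confidence class name, or None if there is none."""
    classes = response_json.get("classes", [])
    if not classes:
        return None
    return max(classes, key=lambda c: c["confidence"])["class_name"]
```

The point of such a service is that "How do I reset my password?" and "I forgot my login" can both resolve to the same intent, so the application answers correctly however the question is phrased.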

IBM is also making available the first set of developer tools that significantly reduce the time required to combine Watson APIs and data sets. The tools are intended to make it easy to embed Watson APIs in any form factor, from mobile devices to cloud services and connected systems. IBM is also previewing IBM Watson Knowledge Studio, where the company will open up its machine learning and text analytics capabilities in a single tool, making it simpler for line-of-business and general subject matter experts to use their own industry and organizational expertise to rapidly train their cognitive applications.
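One way to picture the kind of composition these tools aim to simplify is a small pipeline that chains two services, such as speech-to-text feeding an intent classifier. The function names and data shapes below are assumptions for illustration; the services are passed in as plain callables so any transport could be plugged in.

```python
# Illustrative sketch of chaining two cognitive services. The services are
# injected as plain callables so any transport (SDK, raw HTTP) could be used;
# the names and data shapes are assumptions, not a real Watson API.
def cognitive_pipeline(audio_bytes, transcribe, classify):
    """Run speech-to-text, then intent classification, on one audio input."""
    text = transcribe(audio_bytes)   # e.g. a speech-to-text service
    intents = classify(text)         # e.g. a natural-language classifier
    return {"text": text, "intents": intents}

# Usage with stand-in services (a real app would call the hosted APIs):
fake_transcribe = lambda audio: "what is my account balance"
fake_classify = lambda text: [{"class_name": "account_balance",
                               "confidence": 0.9}]
result = cognitive_pipeline(b"<audio>", fake_transcribe, fake_classify)
```

Decoupling the pipeline from any one transport is what lets the same logic run on a phone, in a cloud service, or on a connected device.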

The San Francisco location will also serve as the new global headquarters for IBM Commerce, a high-growth industry opportunity for IBM and Watson. The teams there will collaborate to integrate Watson solutions with the company's market-leading Commerce portfolio for retailers and consumer products organizations. The facility will open in early 2016.

The APIs featured in the Watson Developer Cloud are drawn from breakthroughs by IBM Research and from strategic acquisitions including AlchemyAPI and Cognea. They are built on advances in natural language processing, deep learning, and machine learning, among other disciplines of artificial intelligence.

IBM also announced its intention to create "industry cartridges" that will allow businesses to quickly and easily draw upon industry-specific data with unique taxonomies. In addition to traditional interactions through smartphones and tablets, the company is now working to integrate Watson into next-generation robotics.

To learn more about the newest APIs, visit the Watson Developer blog.