After a long hibernation, artificial intelligence has awoken and seems energized to finally prove its value to businesses. One of the components underlying AI's resurgence is semantic technology, which helps users understand text, speech and relationships between data elements.
And it isn't just AI -- semantic methodologies also support a variety of other applications in big data environments.
The buzz: Like AI, semantic technology has hovered on the fringe of mainstream IT consciousness for years. It first came to life in 2001 under the banner of the Semantic Web, a concept based on the Resource Description Framework (RDF), which structures data in graph form. RDF has become a staple of semantic computing, along with the SPARQL query language and the Web Ontology Language (OWL). Now, these and other semantic tools are finding new footing in applications that parse speech, categorize questions and analyze sentiment. Uses include natural language processing, social networking, customer and healthcare analytics, and AI undertakings from Amazon's Alexa to IBM's Watson.
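To make the RDF idea concrete: RDF represents data as subject-predicate-object triples, and a SPARQL query is essentially a pattern match over that triple graph. Below is a minimal, dependency-free Python sketch of that model; the data and names (ex:alexa, ex:madeBy and so on) are invented for illustration, and the toy matcher only gestures at what SPARQL formalizes.

```python
# RDF's core model: data as (subject, predicate, object) triples forming a graph.
# Example triples are illustrative, not real IRIs.
triples = {
    ("ex:alexa",  "ex:madeBy",   "ex:amazon"),
    ("ex:watson", "ex:madeBy",   "ex:ibm"),
    ("ex:alexa",  "ex:category", "voice assistant"),
}

def match(pattern):
    """Return every triple matching an (s, p, o) pattern.
    None in any position acts like a SPARQL variable: it matches anything."""
    return [
        t for t in triples
        if all(p is None or p == v for p, v in zip(pattern, t))
    ]

# Toy analogue of: SELECT ?product ?maker WHERE { ?product ex:madeBy ?maker }
for subj, _, obj in sorted(match((None, "ex:madeBy", None))):
    print(subj, "->", obj)
```

A real deployment would use an RDF store and a SPARQL engine rather than in-memory tuples, but the graph-of-triples shape is the same.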
The reality: Enthusiasm for semantic technology could be dashed by the conversational pratfalls of chatbots and voice assistants. More broadly, programmers prepared to grapple with semantic-oriented systems are hard to find. Semantic applications also often depend on complex deployments of data lakes incorporating graph databases. And their ultimate success hinges on another AI-related technology: deep learning algorithms that must process huge volumes of data to fuel semantic engines. Given those challenges, the semantic vision may end up being a pipe dream.