As a company, global access is one of AIKON’s top values and a driving force behind our work with blockchain. To this end, we’re excited to partner with a company on the other side of the world whose work aligns squarely with our shared mission of decentralization and access. We think you’ll be interested in what they’re building as well!
About LangNet: The Language Network (LangNet) is a collaborative effort to map the human language protocol on an open, decentralized platform for the development of language technologies. To this end, LangNet focuses on aggregating the fundamental resources and technologies necessary to build and host language AI systems.
As natural language technologies improve, human language will become a dominant form of human-to-machine interaction — just like Tony Stark and Jarvis in Iron Man. (Yes, we went there.) We’re already seeing early forms of this technology in virtual assistants such as Amazon’s Alexa and the Google Assistant. Those who build these interactions will increasingly shape what we perceive and how we interact with the world.
LangNet believes there should be open, accessible resources and infrastructure for building these interactions. As we see with the Internet and open source software today, open infrastructure fosters competition and broad-based innovation, benefiting users with more choice and more of a say over the direction of this technology.
Through blockchains and tokens, we can coordinate the development of this infrastructure in a decentralized way and experiment with new social compacts for value distribution.
The Language Network (LangNet) is a decentralized network of APIs to add language capabilities to any app or service.
The process of creating these APIs is fully decentralized and community driven, enabling LangNet to scale faster across more use cases than any centralized service or company.
About the API: Speech-to-Text [Korean & English]
LangNet’s Speech-to-Text API offers a speech-to-text service for English and Korean audio. The API is simple: you send an audio file and the service returns the transcribed text.
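The send-a-file, get-text-back flow can be sketched in a few lines of Python. Note this is a minimal illustration, not LangNet’s actual client: the endpoint URL, field names, and response schema below are all assumptions, so check the API documentation for the real contract. The HTTP transport is passed in as a callable so the flow can be exercised without a network connection.

```python
# Hedged sketch of a speech-to-text API call. The endpoint, the "audio"
# and "language" fields, and the {"text": ...} response shape are
# hypothetical placeholders, not LangNet's documented interface.
import json

API_URL = "https://api.example.com/speech-to-text"  # hypothetical endpoint

def transcribe(audio_bytes, language, post):
    """Send raw audio bytes and return the transcribed text.

    `post` is an injectable HTTP POST callable (e.g. a thin wrapper
    around requests.post), so the flow can be tested with a stub.
    """
    response = post(
        API_URL,
        files={"audio": audio_bytes},
        data={"language": language},  # e.g. "en" or "ko"
    )
    return json.loads(response)["text"]

# Stub transport standing in for the real service:
def fake_post(url, files, data):
    return json.dumps({"text": "hello world"})

print(transcribe(b"\x00\x01", "en", fake_post))  # prints: hello world
```

In a real client you would replace `fake_post` with an HTTP library call and add error handling for timeouts and non-200 responses.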
Under the hood, this API uses deep neural networks to convert spoken language to text. The audio clip’s waveform is analyzed according to its spectral density, and this information is sliced into very short time segments. A time delay neural network (TDNN) maps this audio information to a series of distinct sounds, or phones. A recurrent language model (RNN-LM), trained on millions of lines of text, then predicts the most probable sequence of words that the phone sequence represents.
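The front end of that pipeline — slicing the waveform into short segments and describing each by its spectral density — can be illustrated with a toy example. This is a sketch under assumed parameters (16 kHz audio, 25 ms frames, 10 ms hop, a naive DFT); production systems use optimized FFTs and filterbank features, and the neural stages are omitted entirely.

```python
# Toy front end: frame a waveform and compute each frame's power spectrum.
# Frame/hop sizes are common illustrative values, not LangNet's settings.
import math

def frame_signal(samples, frame_len, hop):
    """Slice a sample list into short overlapping frames."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

def power_spectrum(frame):
    """Naive DFT magnitude-squared (spectral density) of one frame."""
    n = len(frame)
    spectrum = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(-2 * math.pi * k * t / n)
                 for t, x in enumerate(frame))
        im = sum(x * math.sin(-2 * math.pi * k * t / n)
                 for t, x in enumerate(frame))
        spectrum.append(re * re + im * im)
    return spectrum

# 100 ms of a 440 Hz tone at 16 kHz: 25 ms frames (400 samples), 10 ms hop.
sr = 16000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 10)]
frames = frame_signal(tone, 400, 160)
features = [power_spectrum(f) for f in frames]
print(len(frames), len(features[0]))  # prints: 8 201
```

Each row of `features` is one time slice; a TDNN would consume a window of such slices to predict a phone for that instant.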
Ready to try LangNet’s Speech-to-Text API? API.market provides free CPU time for your first API call and is currently rewarding early adopters of the platform with ORE tokens.
Connect with LangNet:
On the web: https://langnet.io/
Facebook — Korea: https://www.facebook.com/LangNet-Korea-584985658547802