Cohere announced the general availability of its natural language processing (NLP) platform.
The platform is an easy-to-deploy collection of APIs and tools for developers who want to create websites and apps that can read, write, and understand human language. Cohere believes that broadening access to large language models (LLMs) will lower the barriers to building powerful product experiences rooted in language, shaping the future of how we interact with technology.
Created for developers by developers, the Cohere platform can be deployed with a few lines of code and is designed to make it easy to experiment with, customise, and integrate NLP technology into your stack.
Cohere is currently offering new users free credits covering up to 300 million characters, usable during their first three months on the platform.
Start building today
The Cohere platform provides developers access to large language models that read millions of web pages to understand the meaning, sentiment, and tone of the words we use. Billions of parameters and proprietary training techniques power Cohere’s models to outperform other commercially available models.
The versatile NLP platform offers two types of models that can both read and write text: generation models, which can produce text summaries, descriptions, and blog posts, and extract metadata from unstructured documents; and representation models, which classify and compare text, powering applications like semantic search, chatbots, sentiment analysis, and detection of toxic posts online.
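To illustrate how a representation model's embeddings can power semantic search, here is a minimal sketch that ranks documents by cosine similarity to a query vector. The three-dimensional vectors below are placeholders standing in for the much higher-dimensional embeddings a real embedding endpoint would return; the ranking logic, not the vectors, is the point.

```python
import math

# Placeholder embeddings standing in for vectors a representation model
# would return for each document (real embeddings have hundreds of dimensions).
doc_embeddings = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "account security": [0.0, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_search(query_embedding, docs):
    """Return document titles ranked by similarity to the query embedding."""
    ranked = sorted(
        docs.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [title for title, _ in ranked]

# A query whose (placeholder) embedding is closest to "shipping times".
query = [0.2, 0.9, 0.1]
print(semantic_search(query, doc_embeddings))  # "shipping times" ranks first
```

In a real application, both the documents and each incoming query would be embedded by the same representation model, so semantically related text lands close together in vector space even when no keywords overlap.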
Both model types are available in multiple sizes and can be fine-tuned to meet performance and latency needs. Cohere's generation models can be fine-tuned to understand a niche domain or to reflect your preferred writing style and formatting. Cohere will soon offer fine-tuning for representation models as well, enabling customisation of word embeddings.
Cohere has also defined usage guidelines to mitigate harm and promote responsible use of its platform. “We’re also actively collaborating with the broader research community to prevent harm and malicious use,” the company said.