Fujitsu picks model-maker Cohere as its partner for the rapid LLM-development dance

Will become exclusive route to market for joint projects


Fujitsu has made a "significant investment" in Toronto-based Cohere, a developer of large language models and associated tech, and will bring the five-year-old startup's wares to the world.

The relationship has four elements, one of which will see the two work on a Japanese-language LLM that's been given the working title Takane. Fujitsu will offer Takane to its Japanese clients. Takane will be based on Cohere's latest LLM, Command R+, which we're told features "enhanced retrieval-augmented generation capabilities to mitigate hallucinations."

The duo will also build models "to serve the needs of global businesses."

The third element of the relationship will see Fujitsu appointed the exclusive provider of jointly developed services. The pair envisage those services as private cloud deployments "to serve organizations in highly regulated industries including financial institutions, the public sector, and R&D units."

The fourth and final element of the deal will see Takane integrated with Fujitsu's generative AI amalgamation technology – a service that selects, and if necessary combines, models to get the best tools for particular jobs.

It's 2024, so no IT services provider can afford not to be developing generative AI assets and partnerships. To do otherwise is to risk missing out on the chance of winning business in the hottest new enterprise workload for years – and to forget the time-honored enterprise sales tactic of "land and expand." At worst – if things go pear-shaped – such projects end up as siloed apps that become legacy tech and can be milked for years.

This deal is notable, given that the likes of OpenAI, Mistral AI, and Anthropic are seen as the LLM market leaders worthy of ring-kissing by global tech players.

By partnering with Canadian Cohere, Fujitsu has taken a different path – and perhaps differentiated itself.

Cohere is not, however, a totally left-field choice. Nvidia and Cisco have invested in the biz, and its models are sufficiently well regarded and in demand that AWS, Microsoft, and Hugging Face have all included its wares in their model marketplaces. ®
