Open sourcing IBM’s Granite code models

Technology News |
By Wisse Hettinga



The aim is to make coding as easy as possible — for as many developers as possible

From the IBM research website

IBM Research started exploring years ago whether AI could make it easier to develop and deploy code. In 2021, we unveiled CodeNet, a massive, high-quality dataset with 500 million lines of code in over 50 programming languages, as well as code snippets, code problems, and descriptions. We saw the value that could be unlocked by building a dataset that could train future AI agents: some, we envisioned, would translate code from legacy languages to those that power enterprises today; others would teach developers how to fix issues in their code, or even write code from basic instructions written in plain English.

Large language models (LLMs) trained on code are revolutionizing the software development process. Increasingly, code LLMs are being integrated into software development environments to improve the productivity of human programmers, and LLM-based agents are showing promise in handling complex tasks autonomously. Realizing the full potential of code LLMs requires a wide range of capabilities, including code generation, fixing bugs, explaining and documenting code, maintaining repositories, and more.

The tremendous potential of LLMs that has emerged over the last few years fueled our desire to turn our vision into a reality. And that's exactly what we've begun to do with the IBM watsonx Code Assistant (WCA) family of products, like WCA for Ansible Lightspeed for IT automation and WCA for IBM Z for application modernization. WCA for Z uses a combination of automated tooling and IBM's own 20-billion-parameter Granite large language code model, which enterprises can use to transform monolithic COBOL applications into services optimized for IBM Z.

We've striven to find ways to make developers more productive, spending less of their time figuring out why their code won't run, or how to get a legacy codebase to communicate with newer applications. And that's why today we're announcing that we're open-sourcing four variations of the IBM Granite code model.

We're releasing a series of decoder-only Granite code models for generative code tasks, trained on code written in 116 programming languages. The Granite code model family consists of models ranging in size from 3 to 34 billion parameters, in both base and instruction-following variants. These models have a range of uses, from complex application modernization tasks to on-device, memory-constrained use cases.
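As a concrete illustration, here is a minimal sketch of generating a code completion with one of these models via the Hugging Face transformers library. The model ID shown is an assumption based on IBM's public Granite releases; it is not named in this article, and the prompt is purely illustrative.

```python
# Minimal sketch: code completion with an open Granite code model.
# The model ID is an assumption (IBM's releases on Hugging Face use the
# "ibm-granite" organization); swap in whichever size fits your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3b-code-base"  # assumed model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Give the base model the start of a function and let it complete it.
prompt = "def generate_fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding of up to 64 new tokens; tune max_new_tokens as needed.
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern works across the family: the smaller base models suit on-device completion like this, while the instruction-following variants expect a chat-style prompt describing the task in plain English.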

 
