PostgreSQL, Ollama, and the DBeaver AI Assistant

Ollama is an open-source project that simplifies running large language models (LLMs) locally, and it lets you choose from many models. Keeping all of your data on your own machine is a security bonus.

DBeaver is a universal database tool with an AI Assistant. The assistant adds an extra layer of security by letting you lock down what is shared with the AI engine; the default is metadata only, no data.

Webinar

I recently presented a webinar on using the DBeaver AI Assistant. Several questions came up about Ollama, one of the many AI engines available, but I had not yet tried it.

The first step is downloading Ollama for your platform (Linux, Windows, or Mac) and installing it. Once Ollama is installed, you need to pick an LLM. I chose Gemma3 and installed it using the command-line client.
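For reference, the command-line steps look roughly like this (I used the `gemma3` model tag; the download is several gigabytes):

```shell
# Pull the Gemma3 model from the Ollama library
ollama pull gemma3

# Verify the model is now available locally
ollama list

# Optionally, chat with the model directly to confirm it works
ollama run gemma3 "Say hello"
```

`ollama run` is also a quick sanity check that the model loads and responds before you wire it into DBeaver.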

Adding the Gemma3 LLM to Ollama


DBeaver Configuration

By default, Ollama listens on port 11434. I was able to load the Gemma3 model using the 'Load Model' button.
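Before configuring DBeaver, you can confirm that Ollama is actually up and serving on that port; its REST API answers on localhost, and the `/api/tags` endpoint lists the installed models:

```shell
# The root endpoint replies with a short status message
curl http://localhost:11434/

# List the models Ollama has installed (JSON)
curl http://localhost:11434/api/tags
```

If these calls fail, DBeaver's 'Load Model' button will not find anything either, so this is a useful first check.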

Sample Ollama Settings for DBeaver EE


Example 1

I started with a simple address-book-style schema and asked about one last name - @ai who in the address_book has a last name of Jones?
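Behind the scenes, the assistant turns that prompt into SQL. A hand-written equivalent might look like the following; the `address_book` table and `last_name` column names are my guesses based on the prompt, and the database name is hypothetical:

```shell
# Hypothetical SQL equivalent of the natural-language prompt,
# assuming an address_book table with a last_name column
psql -d mydb <<'SQL'
SELECT *
  FROM address_book
 WHERE last_name = 'Jones';
SQL
```

Comparing the assistant's generated query against what you would write yourself is a good way to judge how well a given model handles your schema.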


Finding the Jones

Example 2


The next test was more of a conversation.

Using the AI Assistant in DBeaver to converse with Ollama

And the results:



Summary


DBeaver works well with Ollama. In my experience, Ollama is many times slower than the paid AI engines, but my laptop is not a powerful, GPU-laden machine, and Gemma3 may not be the optimal model for database work. Still, it works, and it is a good place to start exploring your data with an AI.
