Posts

Saving The Output From psql

  Occasionally, you will need to capture data when working with a database. You can cut-and-paste or use a shell command like script. In typical PostgreSQL fashion, psql has two ways to save your output.

The First Way

You must invoke psql with the --log-file=&lt;filename&gt; or -L &lt;filename&gt; option. The entire session will be recorded.

    stoker@ThinkPad:~$ psql --log-file=/tmp/psqllog demo
    Password for user stoker:
    psql (17.4 (Ubuntu 17.4-1.pgdg24.04+2))
    Type "help" for help.

    demo=# \d ledger
                       Table "public.ledger"
         Column     |  Type   | Collation | Nullable | Default
    ----------------+---------+-----------+----------+---------
     id             | integer |           |          |
     tx_id          | integer |        ...
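The excerpt above truncates before the second method appears. Purely for comparison — this may or may not be the method the full post describes — psql also has the \o meta-command, which redirects query output (rather than the entire session) to a file from inside a running session:

```sql
-- From the shell, as shown above, --log-file records the whole session:
--   psql --log-file=/tmp/psqllog demo
-- Inside an already-running session, \o redirects query output only:
\o /tmp/query_output.txt
SELECT * FROM ledger;
\o            -- a bare \o stops redirecting; output returns to the terminal
```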

PostgreSQL's COPY and \COPY

PostgreSQL is equivalent to a Swiss Army Knife in the database world. There are things in PostgreSQL that are very simple to use, while in another database, they take many more steps to accomplish. But sometimes, the knife has too many blades, which can cause confusion. This is one of those cases.

COPY and \COPY

I needed a quick dump of a table as part of an ETL (Extract, Transform, and Load - official definition; Exorcism, Trauma, and Lost-sleep - unofficially) to port some data. The COPY command is a quick way to output and write the data to a file. The two copy commands below illustrate adding a delimiter and then writing the output to a file.

    demo=# select * from z;
     a | b  |  c
    ---+----+-----
     1 | 10 | 100
     2 | 20 | 200
     3 | 30 | 300
     4 | 40 | 400
    (4 rows)

    demo=# copy z to stdout (DELIMITER ',');
    1,10,100
    2,20,200
    3,30,300
    4,40,400
    demo=# copy z to '/tmp/z' (DELIMITER ',');
    COPY 4
    demo=#

The trick is that you must...
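The excerpt cuts off at "the trick," which in practice likely concerns where the file is written: COPY ... TO '&lt;file&gt;' writes on the *server*, with the server process's permissions, while the psql meta-command \copy writes on the *client*. A minimal sketch, assuming the same table z:

```sql
-- Server-side: the file is written by the PostgreSQL server process, so the
-- path is on the server and you generally need superuser rights or membership
-- in the pg_write_server_files role.
COPY z TO '/tmp/z' (DELIMITER ',');

-- Client-side: psql streams the rows over the connection and writes the file
-- itself, with your own operating-system user's permissions.
\copy z to '/tmp/z_local.csv' (format csv, header)
```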

Can Artificial Intelligence Create Better Tables Than You?

 Artificial Intelligence is one of those conundrums where the ability to have some tasks handled for you contrasts with a mix of ego and pride that it may be better than you at those tasks. I recently wrote a blog, using another database, about an AI-generated SQL query that was quite sophisticated. But what about asking an AI to create a table from a specification like a DBA/SRE/Analyst might receive?

I used Grok and entered the following prompt:

    Write the SQL to create a table on a PostgreSQL version 17 server to record data on customers. The data will include a first name, a last name, an address, a birthdate, and a unique UUID primary key. Partition the data by the year of birth. And produce some test data of at least ten records.

I am sure many of us have started large projects given less instruction. Notice: I did not denote the format for the address (US Post Office's format, or UK, or other). Nor did I set a name length. I wanted to see what assumptions were made by t...
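For reference, here is a hand-written sketch of what such a table might look like — this is my own guess, not the AI's actual output from the post. Note the PostgreSQL wrinkle the prompt glosses over: a primary key on a partitioned table must include the partition key columns, so a lone UUID cannot be the primary key when partitioning by birthdate.

```sql
CREATE TABLE customers (
    id         uuid DEFAULT gen_random_uuid(),
    first_name text NOT NULL,
    last_name  text NOT NULL,
    address    text,
    birthdate  date NOT NULL,
    PRIMARY KEY (id, birthdate)   -- the partition key must be part of the PK
) PARTITION BY RANGE (birthdate);

-- One partition per year of birth:
CREATE TABLE customers_1990 PARTITION OF customers
    FOR VALUES FROM ('1990-01-01') TO ('1991-01-01');
```

It will be interesting to see whether an AI-generated answer handles the primary-key/partition-key interaction at all.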

Use PASSING with JSON_TABLE() To Make Calculations

I ran across this trick while preparing a talk for the Southern California Linux Expo (SCaLE) on using JSON_TABLE(). This is a 'reserve' talk, which means it will be presented IF another speaker drops out. But I thought this was interesting enough to share here. I will make sure the slides are posted online when finished.

Passing Interest

JSON_TABLE() temporarily converts JSON data into a relational table so that the temporary table can be processed by Structured Query Language (SQL) commands. A feature in the PostgreSQL implementation is the PASSING clause. In the example below, you will see PASSING 1500 AS price_limit in red. This declares price_limit as a variable and assigns a value of 1500 to it. Later, in the body of the JSON_TABLE() call, you will see in green is_too_expensive BOOLEAN EXISTS PATH '$.price ? (@ > $price_limit)'. Here a variable is created named is_too_expensive of type BOOLEAN, which means we get a true or false answer from the r...
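Since the excerpt cuts off mid-explanation, here is a self-contained sketch of the pattern being described. The sample data and most column names are my own, not the post's; only the PASSING clause and the is_too_expensive column come from the text above.

```sql
SELECT jt.*
FROM json_table(
    '[{"item": "laptop", "price": 1999},
      {"item": "mouse",  "price": 25}]',
    '$[*]'
    PASSING 1500 AS price_limit          -- declare price_limit and bind 1500 to it
    COLUMNS (
        item  text    PATH '$.item',
        price numeric PATH '$.price',
        -- true when the row's price exists and exceeds $price_limit
        is_too_expensive boolean EXISTS PATH '$.price ? (@ > $price_limit)'
    )
) AS jt;
```

The variable bound in PASSING is referenced inside the JSON path expression with a leading $, just like the $price_limit above.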

How PostgreSQL's Aggregate FILTER Will Spoil You

Using PostgreSQL will definitely spoil you. I recently had to use another database and found, to my chagrin, that aggregate filters were not included in their implementation of Structured Query Language. Specifically, there was no FILTER clause available.

FILTER?

To illustrate my dependency, let me start with some simple data.

    demo=# create table z (a int, b int, c int);
    CREATE TABLE
    demo=# insert into z values (1,10,100),(2,20,200),(3,30,300),(4,40,400);
    INSERT 0 4
    demo=# select a,b,c from z;
     a | b  |  c
    ---+----+-----
     1 | 10 | 100
     2 | 20 | 200
     3 | 30 | 300
     4 | 40 | 400
    (4 rows)

    demo=#

If I wanted the overall number of rows, I could use a simple COUNT(*). If I wanted an overall number of rows where one of the columns was of a certain value, I could add the logic for getting those rows to the WHERE clause. What if I wanted both? Do I need two queries to do the work separately? Maybe a materialized view or a CTE (Common Tabl...
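The post's answer is truncated above, but the clause in question lets one query do both jobs: FILTER attaches a WHERE-style condition to an individual aggregate. A minimal sketch against the table z, with my own column aliases:

```sql
-- One pass over the table: an unconditional count alongside conditional aggregates.
SELECT count(*)                          AS total_rows,
       count(*) FILTER (WHERE b > 20)    AS rows_b_over_20,
       sum(c)   FILTER (WHERE a % 2 = 0) AS sum_c_even_a
FROM z;
```

Each FILTER applies only to its own aggregate, so no second query, materialized view, or CTE is needed.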

Constraint Checks And Dirty Data

 It is much easier to keep bad data out of your database than it is to have to correct it later. Those costs are measured in time, money, and sanity. One of the easier ways of protecting your data is with a constraint check. I was recently approached by someone who urgently needed a constraint check and could afford to 'fix' the bad data after the constraint was in place. They had to quickly keep new 'bad' data out and could afford to correct the old 'bad' data at leisure.

The Overly Simplified Example

To illustrate how to do just that, here is an example of a table and some simple data.

    demo=# create table x1 (a int);
    CREATE TABLE
    demo=# insert into x1 (a) values (10),(20),(30),(-5),(1);
    INSERT 0 5
    demo=#

Sometime later, it is discovered that any value of a needs to be greater than five. If we add the constraint, the system will balk and refuse the request.

    demo=# alter table x1 add constraint x_id_gt_5 check (a > 5);
    ERROR:  check constraint "x_id_...
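The excerpt ends on the error. One standard PostgreSQL answer to exactly this bind — enforce the rule now, repair old rows later — is the NOT VALID option; whether the full post uses this or some other approach is cut off above, so treat this as a sketch:

```sql
-- NOT VALID skips checking existing rows; only rows inserted or updated
-- from now on must satisfy the constraint.
ALTER TABLE x1 ADD CONSTRAINT x_id_gt_5 CHECK (a > 5) NOT VALID;

-- Later, after the old 'bad' rows have been corrected at leisure:
UPDATE x1 SET a = 10 WHERE a <= 5;   -- hypothetical cleanup
ALTER TABLE x1 VALIDATE CONSTRAINT x_id_gt_5;
```

VALIDATE CONSTRAINT rescans the table and, if every row passes, removes the NOT VALID marking.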

PostgreSQL MERGE To Reconcile Cash Register Sales Data Part 2

Part one of this series covered a simple use of MERGE and can be found here. In Part Two, we get more complicated logically. Previously, we only had a WHEN NOT MATCHED logical leg in the code. Now, we add WHEN MATCHED logic.

WHEN MATCHED and WHEN NOT MATCHED

The query now has two logical legs - when there are matching rows and when there are no matching rows. I have color-coded the query to illustrate the separate paths.

    demo=# merge into ledger l
           using register r
           on l.id = r.id
           WHEN NOT MATCHED THEN
             insert (id, tx_id, total_register, misc)
             values (r.id, r.tx_id, r.amount, 0)
           WHEN MATCHED THEN
             update set misc = misc + 1
           RETURNING merge_action(), l.*;

     merge_action | id | tx_id | total_register | misc
    --------------+----+-------+----------------+------
     UPDATE       |  1 |  ...
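To try the query above you need the two tables it references. The post's actual DDL is in Part One and not shown here, so this is a minimal guess at their shapes, inferred from the columns the MERGE uses:

```sql
-- Target: the running ledger the MERGE inserts into or updates.
CREATE TABLE ledger (
    id             int PRIMARY KEY,
    tx_id          int,
    total_register numeric,
    misc           int DEFAULT 0
);

-- Source: raw rows from the cash register to reconcile against the ledger.
CREATE TABLE register (
    id     int,
    tx_id  int,
    amount numeric
);
```

With these in place, the ON l.id = r.id join decides which leg fires: unmatched register rows are inserted, matched ones bump the misc counter, and RETURNING merge_action() (PostgreSQL 17) reports which leg handled each row.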