SQLAlchemy: Prevent implicit cross join (cartesian product)

Posted by Moser on 02 Jan 2020

I really like creating SQL queries with SQLAlchemy’s explicit and declarative API. When using it instead of raw strings, roughly half of the errors I tend to introduce are caught before the query is even sent.

Here is an example:

import sqlalchemy as sa

metadata = sa.MetaData()

a = sa.Table(
    "a",
    metadata,
    sa.Column("a_id", sa.Integer, primary_key=True, autoincrement=True),
    sa.Column("name", sa.String),
)

b = sa.Table(
    "b",
    metadata,
    sa.Column("b_id", sa.Integer, primary_key=True, autoincrement=True),
    sa.Column("a_id", sa.Integer, sa.ForeignKey(a.c.a_id)),
)

def create_select(additional_filters):
    return sa.select([a], whereclause=sa.and_(*additional_filters))

print(create_select([a.c.name == 'Foo']))
# SELECT a.a_id, a.name
# FROM a
# WHERE a.name = :name_1

A huge advantage over dealing with string queries is that you can create different parts of the query on their own and combine them. In the example I create the SELECT statement in a central place and allow callers to pass in parts of the WHERE clause.

Unfortunately, this pattern has a dangerous property: It will implicitly add a CROSS JOIN (or cartesian product) when you pass a filter expression that contains a column from a table that is not part of the select statement already.

print(create_select([a.c.name == 'Foo', b.c.b_id == 1]))
# SELECT a.a_id, a.name
# FROM a, b
# WHERE a.name = :name_1 AND b.b_id = :b_id_1

On small tables this will just create an unexpected result set, but when the involved tables are large this query might well exhaust the DB server’s resources. The concrete problem can be fixed by adding a join to all the tables that should be allowed in the filters:

def create_select_corrected(additional_filters):
    return sa.select([a], from_obj=a.join(b), whereclause=sa.and_(*additional_filters))

print(create_select_corrected([a.c.name == 'Foo', b.c.b_id == 1]))
# SELECT a.a_id, a.name
# FROM a JOIN b ON a.a_id = b.a_id
# WHERE a.name = :name_1 AND b.b_id = :b_id_1

A more abstract problem is that we can create queries that will lead to unexpected results without noticing. A good but pretty expensive solution would be to validate the parts that are passed in. Depending on how variable the input can be, I would go for this solution. But in other cases it’s just myself using my query construction logic in new ways. So I prefer a cheaper solution that makes me aware of the problem.

Update 2020-05-26: SQLAlchemy 1.4

Starting with version 1.4 SQLAlchemy provides a better and safer way to discover the problem. It will issue a warning whenever the compiled SQL contains a cartesian product.
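Here is a minimal sketch of what that looks like (my illustration, assuming SQLAlchemy ≥ 1.4 with its 2.0-style `select()`; the table definitions mirror the ones above). Note that the linter fires when the statement is executed against an engine, not when it is built:

```python
import warnings

import sqlalchemy as sa

metadata = sa.MetaData()
a = sa.Table(
    "a",
    metadata,
    sa.Column("a_id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String),
)
b = sa.Table(
    "b",
    metadata,
    sa.Column("b_id", sa.Integer, primary_key=True),
    sa.Column("a_id", sa.Integer, sa.ForeignKey("a.a_id")),
)

# "From linting" is enabled by default on engines in 1.4+.
engine = sa.create_engine("sqlite://")
metadata.create_all(engine)

# Same mistake as before: b appears in the WHERE but not the FROM.
query = sa.select(a).where(a.c.name == "Foo", b.c.b_id == 1)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    with engine.connect() as conn:
        conn.execute(query).fetchall()

print([str(w.message) for w in caught])
# an SAWarning mentioning a cartesian product between "a" and "b"
```

If you want this to fail hard instead of just warning, you can escalate it with `warnings.simplefilter("error", sa.exc.SAWarning)`.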

Old solution (still necessary for SQLAlchemy < 1.4)

Thankfully, SQLAlchemy’s events allow us to be notified when a query is about to be executed. We can create a listener that raises an exception when we try to run a problematic query.

def before_execute(conn, clauseelement, multiparams, params):
    if (
        isinstance(clauseelement, sa.sql.selectable.Select)
        and len(clauseelement.froms) > 1
    ):
        raise RuntimeError("Cross join detected:\n{}".format(clauseelement))

sa.event.listen(engine, "before_execute", before_execute)

It’s not perfect because it does not check subqueries or CTEs but it gives us a line of defense. I am also thinking of adding an assertion that checks for the problem to the tests that exercise the query construction logic.

Here is the complete example code: sqlalchemy_implicit_cross_join.py.

pyplot: Histogram on X/Y axis

Posted by Moser on 23 Mar 2018

Last week a colleague asked me for help on a figure he wanted to plot using pyplot. He had a couple of line plots and an intersecting line and wanted to show the histogram of the intersection points on both the X and Y axis.

I found a neat way of accomplishing such a plot using a GridSpec.

Here is the code, using a regular scatter plot as the central element, but as every part of the figure is filled separately you can swap any of them out. You have to make sure though that the histogram plots and the main plot have the same X and Y axes respectively.

I remove the axes from the main plot (set_axis_off()) because they looked redundant. tight_layout is used to remove the gaps between the subplots. You can tweak width_ratios and height_ratios to change how much space is allocated to the histograms and the main plot.
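The original code was linked as a gist; here is a reconstruction of the idea (a sketch with synthetic data standing in for the intersection points):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

# Synthetic stand-in for the intersection points.
rng = np.random.default_rng(42)
x, y = rng.normal(size=500), rng.normal(size=500)

fig = plt.figure(figsize=(6, 6))
# 2x2 grid: top-left = X histogram, bottom-left = main plot,
# bottom-right = Y histogram; tweak the ratios to taste.
gs = GridSpec(2, 2, width_ratios=[4, 1], height_ratios=[1, 4])

ax_main = fig.add_subplot(gs[1, 0])
ax_histx = fig.add_subplot(gs[0, 0], sharex=ax_main)
ax_histy = fig.add_subplot(gs[1, 1], sharey=ax_main)

ax_main.scatter(x, y, s=5)
ax_main.set_axis_off()  # the main plot's axes looked redundant

ax_histx.hist(x, bins=30)
ax_histy.hist(y, bins=30, orientation="horizontal")

fig.tight_layout()  # removes the gaps between the subplots
fig.savefig("hist_xy.png")
```

Sharing the axes via `sharex`/`sharey` is what keeps the histograms aligned with the main plot.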

PostgreSQL: JSON aggregation

Posted by Moser on 17 Mar 2018

On a client project I have to search a structure of tables and return results that include some child objects: a classical instance of the N+1 query problem, had I been using an ORM. I decided not to use the ORM for this feature because it will be one of the hottest paths of the application and I wanted more control over the queries (rather complex logic with LIKE and sorting). But the filtering/sorting is not today's topic, so I will leave it out of the examples.

For illustration, let’s assume the following schema:

CREATE TABLE parents (
  parent_id SERIAL PRIMARY KEY,
  name VARCHAR
);

CREATE TABLE children (
  child_id SERIAL PRIMARY KEY,
  name VARCHAR,
  birthdate DATE,
  parent_id INTEGER,
  FOREIGN KEY (parent_id) REFERENCES parents
);

Without any further thinking I would do a query like this and a bit of code that constructs parent objects with their respective children.

SELECT * FROM parents JOIN children USING (parent_id) ORDER BY parent_id;
 parent_id | name  | child_id |  name   | birthdate  
-----------+-------+----------+---------+------------
         1 | Jim   |        1 | Tamara  | 2017-02-01
         1 | Jim   |        3 | Tom     | 2005-10-01
         1 | Jim   |        5 | Tonja   | 2011-07-17
         2 | Jenny |        2 | Tim     | 2000-11-02
         2 | Jenny |        4 | Theresa | 2017-04-30

Just before I started writing the logic to pull apart the result and put it into its object structure, I thought, “It would be nice to let the database put together all the children of one parent into an array.”

I already knew array_agg which aggregates all values into an array. After some reading I discovered json_build_object which takes a sequence of keys and values and creates a JSON object from it.

So my final query looked like this:

SELECT
    p.parent_id
  , p.name
  , array_agg(json_build_object(
      'child_id', c.child_id
    , 'name', c.name
    , 'birthdate', c.birthdate
  )) as children
FROM parents p
JOIN children c USING (parent_id)
GROUP BY 1, 2;
 parent_id | name  |                                                                                                             children                                                                                                              
-----------+-------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
         1 | Jim   | {"{\"child_id\" : 1, \"name\" : \"Tamara\", \"birthdate\" : \"2017-02-01\"}","{\"child_id\" : 3, \"name\" : \"Tom\", \"birthdate\" : \"2005-10-01\"}","{\"child_id\" : 5, \"name\" : \"Tonja\", \"birthdate\" : \"2011-07-17\"}"}
         2 | Jenny | {"{\"child_id\" : 2, \"name\" : \"Tim\", \"birthdate\" : \"2000-11-02\"}","{\"child_id\" : 4, \"name\" : \"Theresa\", \"birthdate\" : \"2017-04-30\"}"}

When the query is executed with SQLAlchemy, the children column of the resulting rows is already correctly typed as a list of dictionaries.
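For reference, the same query can also be built with SQLAlchemy core instead of a raw string (my sketch, assuming 1.4+-style `select()`; the tables match the schema above):

```python
import sqlalchemy as sa

metadata = sa.MetaData()
parents = sa.Table(
    "parents",
    metadata,
    sa.Column("parent_id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String),
)
children = sa.Table(
    "children",
    metadata,
    sa.Column("child_id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String),
    sa.Column("birthdate", sa.Date),
    sa.Column("parent_id", sa.Integer, sa.ForeignKey("parents.parent_id")),
)

# array_agg(json_build_object(...)), just like the raw SQL version
children_json = sa.func.array_agg(
    sa.func.json_build_object(
        "child_id", children.c.child_id,
        "name", children.c.name,
        "birthdate", children.c.birthdate,
    )
).label("children")

query = (
    sa.select(parents.c.parent_id, parents.c.name, children_json)
    .select_from(parents.join(children))
    .group_by(parents.c.parent_id, parents.c.name)
)
print(query)
```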

The explain analyze output for both queries (on a trivially small test set) shows that the aggregated version is 50-100% slower (150-200μs vs. 250-350μs), but I guess that will rarely be a real problem because, at least in my case, the execution time of the query is dominated by filtering/sorting the parent rows.

If you would like to play with the example yourself, get the SQL file here.

Python: bare except statement - why it is bad.

Posted by Moser on 20 Sep 2017

The bare except statement is something I often see in the main loop of a script that should run indefinitely. Pylint warns you about this problem (W0702: No exception type(s) specified (bare-except)), but in my opinion we should start to treat this as an error like flake8 does (E722 do not use bare 'except').

Consider the following code snippet:

import sys
import time

def risky_business():
    time.sleep(1)

while True:
    try:
        risky_business()
    except:
        print("Problems?", sys.exc_info())

This will work nicely to keep your program running whatever happens inside your risky_business function. But let’s try to stop the program by pressing CTRL-C.

$ python a.py 
^C('Problems?', (<type 'exceptions.KeyboardInterrupt'>, KeyboardInterrupt(), <traceback object at 0x7f234f6c9dd0>))

We caught the KeyboardInterrupt exception and just carried on. But we also catch things we should not. Let’s try to exit the program from the business function, maybe because the user asked for it or we got a signal.

def risky_business():
    sys.exit(0)

$ python a.py
('Problems?', (<type 'exceptions.SystemExit'>, SystemExit(0,), <traceback object at 0x7f49b7ab5e18>))
('Problems?', (<type 'exceptions.SystemExit'>, SystemExit(0,), <traceback object at 0x7f49b7ab5ea8>))

So why is this happening? Both KeyboardInterrupt and SystemExit are exceptions (derived from BaseException). An except statement without any restriction will catch any exception derived from BaseException. Here is the complete hierarchy of the builtin exceptions.

We can fix our problem by restricting the kind of exception that we want to catch:

def risky_business():
    time.sleep(1)

while True:
    try:
        risky_business()
    except Exception:
        print("Problems?", sys.exc_info())

When we press CTRL-C now, the program exits with the right exception.

$ python a.py
^CTraceback (most recent call last):
  File "a.py", line 10, in <module>
    risky_business()
  File "a.py", line 6, in risky_business
    time.sleep(1)
KeyboardInterrupt

Pylint will still warn you about this line, because catching Exception is still very broad and you should only do this when you have a good reason, e.g. the reason mentioned above (to keep a service running even if it could not handle a certain input or request).

So next time you see a bare except statement in your code, take a moment to reconsider if you really want to catch SystemExit, KeyboardInterrupt (and GeneratorExit which I did not mention above). If that is the case, you might want to make it explicit so that the next person reading the code does not have to check again.
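Making the intent explicit could look like this (my sketch, not from the original post): `except BaseException` behaves exactly like a bare except, but tells the reader it is deliberate.

```python
def handle_everything(fn):
    """Run fn and swallow *all* exceptions, explicitly on purpose."""
    try:
        fn()
    except BaseException as exc:  # explicit equivalent of a bare `except:`
        return type(exc).__name__
    return None

def interrupted():
    raise KeyboardInterrupt

print(handle_everything(interrupted))  # KeyboardInterrupt
```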

Python: Minimum element by attribute

Posted by Moser on 02 Mar 2017

Suppose you want an element from a list that is minimal in a certain respect. The code you usually see for this is:

import collections
Item = collections.namedtuple("Item", "id cost")
items = [Item(1, 10.00), Item(2, 200), Item(3, 1.0)]

sorted(items, key=lambda item: item.cost)[0]
# Item(id=3, cost=1.0) 

Here is a nicer way to do this using the standard library:

min(items, key=lambda item: item.cost)
# Item(id=3, cost=1.0) 
# or even
import operator as op
min(items, key=op.attrgetter("cost"))
# Item(id=3, cost=1.0) 

This is not only more elegant, but also more efficient: sorting a list is O(n log n), while finding the minimal element is only O(n). When dealing with generators, this method also uses a lot less memory because it never materializes the whole sequence.
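One caveat worth noting (my addition, not from the original post): unlike the `sorted(...)[0]` version, which raises IndexError, `min()` raises ValueError on an empty iterable unless you supply the `default` keyword (available since Python 3.4):

```python
import operator as op

# min() on an empty sequence raises ValueError ...
try:
    min([], key=op.attrgetter("cost"))
except ValueError as exc:
    print("empty:", type(exc).__name__)  # empty: ValueError

# ... unless a fallback is supplied via `default` (Python 3.4+)
result = min([], key=op.attrgetter("cost"), default=None)
print(result)  # None
```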

Solving the Minotauros Cube with Haskell

Posted by Moser on 22 Sep 2015

tl;dr: Puzzle had to be solved. Did it in Haskell. Had fun. Code is here :-)

Last week one of my colleagues, Marco, brought a so-called Minotauros cube to work and left it on the counter near the coffee machine. You might call it an obvious attack on the productivity of the whole company.

At least for me, it is hard to tolerate being unable to solve a puzzle. Unfortunately, it is quite a tricky one. Here is a picture; there is only one way of putting the parts back into a cubic shape.

Minotauros cube

While I am sure that there are solutions and algorithms to solve this kind of puzzle, I wanted to think of something myself. Thus, I abstained from searching and started to think about a way to solve it. I immediately decided to build the solution in Haskell. I have done quite a bit of programming in Haskell in my first year of university but not much recently. But at Polyconf 2015 I got some inspiration to do more functional programming again (I am playing with elm (talk at Polyconf) at the moment, more on that in another post).

It took me some time and a few fruitless experiments until I had a good idea of how to structure the solution. I decided to use a backtracking algorithm which tries to find dead ends as quickly as possible.

01: function solve (shapes, solution)
02:   shape, rest = head, tail of shapes 
03:   for shape' in all rotations of shape
04:     for shape'' in all possible positions of shape'
05:       solution' := solution ++ shape''
06:       if solution' is valid
07:         solution'' := solve (rest, solution')
08:         if solution'' is valid
09:           return solution''
10:   return invalid solution

11: solve(all shapes, empty solution)

During the implementation I decided to change the algorithm slightly: It now starts out with a full solution (a shape in which all the coordinates are “occupied”) and consequently “subtracts” instead of “adds” in line 05.

Another problem was the data structure to represent shapes. My first try featured the following type using Data.Array:

import Data.Array
type Shape = Array (Int, Int, Int) Int

It is quite an obvious choice for somebody who lives most of the day in the imperative world, but I noticed quickly that the Array type is a little cumbersome to work with.

I then reconsidered what a truly functional way of representing spatial data would be. I came up with the following:

data Shape = None
           | Point Shape Shape Shape Shape Shape Shape
           --      North South West  East  Above Below

I planned to use this as a three-dimensional doubly linked list. I thought it might be nice to traverse this using pattern matching. While Haskell’s laziness enables us to do so I quickly noticed that using this kind of structure is too much for my mind.

I finally resorted to something easier:

import qualified Data.Set as Set
type Point = (Int, Int, Int)
type Shape = Set.Set Point

As soon as I had made up my mind about the data structure it took me a couple of evenings to create all the necessary functions. The full implementation can be found in this gist.

This experiment and also my recent experience with elm has motivated me to do more strongly typed functional programming again. The power of those compilers is sometimes just incredible. ;-)