Plain SQL Queries
Sometimes you may need to write your own SQL code for an operation which is not well supported at a higher level of abstraction. Instead of falling back to the low level of JDBC, you can use Slick’s Plain SQL queries with a much nicer Scala-based API.
This chapter is based on the Plain SQL Queries sample which provides a ready-to-run app to demonstrate the features.
Scaffolding
The database connection is opened in the usual way. All Plain SQL queries result in a DBIOAction that can be composed and run like any other action.
String Interpolation
Plain SQL queries in Slick are built via string interpolation using the sql, sqlu and tsql interpolators. They are available through the standard api._ import from a Slick profile:
import slick.jdbc.H2Profile.api.*
You can see the simplest use case in the following methods, where the sqlu interpolator is used with a literal SQL string:
def createCoffees: DBIO[Int] =
  sqlu"""CREATE TABLE coffees(
    name VARCHAR NOT NULL,
    sup_id INT NOT NULL,
    price DOUBLE NOT NULL,
    sales INT NOT NULL,
    total INT NOT NULL,
    FOREIGN KEY(sup_id) REFERENCES suppliers(id))"""

def createSuppliers: DBIO[Int] =
  sqlu"""CREATE TABLE suppliers(
    id INT NOT NULL PRIMARY KEY,
    name VARCHAR NOT NULL,
    street VARCHAR NOT NULL,
    city VARCHAR NOT NULL,
    state VARCHAR NOT NULL,
    zip VARCHAR NOT NULL)"""

def insertSuppliers: DBIO[Unit] = DBIO.seq(
  // Insert some suppliers
  sqlu"INSERT INTO suppliers VALUES(101, 'Acme, Inc.', '99 Market Street', 'Groundsville', 'CA', '95199')",
  sqlu"INSERT INTO suppliers VALUES(49, 'Superior Coffee', '1 Party Place', 'Mendocino', 'CA', '95460')",
  sqlu"INSERT INTO suppliers VALUES(150, 'The High Ground', '100 Coffee Lane', 'Meadows', 'CA', '93966')"
)
The sqlu interpolator is used for DML statements, which produce a row count instead of a result set. They are therefore of type DBIO[Int].
Any variable or expression injected into a query gets turned into a bind variable in the resulting query string. It is not inserted directly into the query string, so there is no danger of SQL injection attacks. You can see this in the following method:
def insert(c: Coffee): DBIO[Int] =
  sqlu"insert into coffees values (${c.name}, ${c.supID}, ${c.price}, ${c.sales}, ${c.total})"
The SQL statement produced by this method is always the same:
insert into coffees values (?, ?, ?, ?, ?)
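The way interpolated arguments become placeholders can be sketched with a tiny stand-alone interpolator. This is a simplified, hypothetical model for illustration only; Slick's real implementation differs, and the names SqlFragment and toySql are made up here:

```scala
// A made-up fragment type: the statement text plus the bind parameters,
// kept strictly separate so values never end up in the SQL string itself.
final case class SqlFragment(statement: String, params: List[Any])

implicit class SqlInterpolator(private val sc: StringContext) {
  def toySql(args: Any*): SqlFragment = {
    // Each interpolated argument is replaced by a '?' placeholder and
    // collected separately -- this is why injection cannot occur.
    val statement = sc.parts.mkString("?")
    SqlFragment(statement, args.toList)
  }
}

val name = "Colombian'; DROP TABLE coffees; --"
val fragment = toySql"select * from coffees where name = $name"
// fragment.statement == "select * from coffees where name = ?"
// fragment.params    == List("Colombian'; DROP TABLE coffees; --")
```

Even with a hostile value interpolated, the statement text stays fixed; the value only ever travels as a parameter.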
Note the use of the DBIO.sequence combinator, which is useful for this kind of code:
val inserts: Seq[DBIO[Int]] = Seq(
  Coffee("Colombian", 101, 7.99, 0, 0),
  Coffee("French_Roast", 49, 8.99, 0, 0),
  Coffee("Espresso", 150, 9.99, 0, 0),
  Coffee("Colombian_Decaf", 101, 8.99, 0, 0),
  Coffee("French_Roast_Decaf", 49, 9.99, 0, 0)
).map(insert)
val combined: DBIO[Seq[Int]] = DBIO.sequence(inserts)
combined.map(_.sum)
Unlike the simpler DBIO.seq combinator, which runs a (varargs) sequence of database I/O actions in the given order and discards the return values, DBIO.sequence turns a Seq[DBIO[T]] into a DBIO[Seq[T]], thus preserving the results of all individual actions. It is used here to sum up the affected row counts of all inserts.
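The shape of the sequence combinator can be illustrated with a minimal stand-in action type. MiniDBIO here is invented for illustration and is not part of Slick:

```scala
// A minimal deferred action: nothing runs until run() is called.
final case class MiniDBIO[T](run: () => T) {
  def map[U](f: T => U): MiniDBIO[U] = MiniDBIO(() => f(run()))
}

object MiniDBIO {
  // Turn a Seq of actions into one action that returns all results in
  // order -- the same Seq[F[T]] => F[Seq[T]] shape as DBIO.sequence.
  def sequence[T](actions: Seq[MiniDBIO[T]]): MiniDBIO[Seq[T]] =
    MiniDBIO(() => actions.map(_.run()))
}

// Three "inserts", each reporting one affected row.
val miniInserts: Seq[MiniDBIO[Int]] = Seq(1, 1, 1).map(n => MiniDBIO(() => n))
val combinedMini: MiniDBIO[Seq[Int]] = MiniDBIO.sequence(miniInserts)
val totalRows: MiniDBIO[Int] = combinedMini.map(_.sum)
// totalRows.run() == 3
```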
Result Sets
The following code uses the sql interpolator, which returns a result set produced by a statement. The interpolator by itself does not produce a DBIO value. It needs to be followed by a call to .as to define the row type:
sql"""select c.name, s.name
      from coffees c, suppliers s
      where c.price < $price and s.id = c.sup_id""".as[(String, String)]
This results in a DBIO[Seq[(String, String)]]. The call to as takes an implicit GetResult parameter which extracts data of the requested type from a result set. There are predefined GetResult implicits for the standard JDBC types, for Options of those (to represent nullable columns), and for tuples of types which have a GetResult. For non-standard return types you have to define your own converters:
// Case classes for our data
case class Supplier(id: Int, name: String, street: String, city: String, state: String, zip: String)
case class Coffee(name: String, supID: Int, price: Double, sales: Int, total: Int)

// Result set getters
implicit val getSupplierResult: GetResult[Supplier] =
  GetResult(r => Supplier(r.nextInt(), r.nextString(), r.nextString(),
                          r.nextString(), r.nextString(), r.nextString()))
implicit val getCoffeeResult: GetResult[Coffee] = GetResult(r => Coffee(r.<<, r.<<, r.<<, r.<<, r.<<))
GetResult[T] is simply a wrapper for a function PositionedResult => T. The implicit val for Supplier uses the explicit PositionedResult methods nextInt and nextString to read the next Int or String value in the current row. The second one uses the shortcut method <<, which returns a value of whatever type is expected at this place. (Of course you can only use it when the type is actually known, as in this constructor call.)
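The mechanics of << can be sketched with a self-contained toy version of the typeclass. All Toy* names below are made up for illustration; Slick's real GetResult and PositionedResult are considerably richer:

```scala
// The typeclass: just a wrapped "read the next value" function.
final case class ToyGetResult[T](read: ToyPositionedResult => T)

// A cursor over one row of untyped column values.
final class ToyPositionedResult(row: IndexedSeq[Any]) {
  private var pos = 0
  private def next(): Any = { val v = row(pos); pos += 1; v }
  def nextInt(): Int = next().asInstanceOf[Int]
  def nextDouble(): Double = next().asInstanceOf[Double]
  def nextString(): String = next().asInstanceOf[String]
  // << picks the reader from the type expected at the call site.
  def <<[T](implicit gr: ToyGetResult[T]): T = gr.read(this)
}

implicit val toyGetInt: ToyGetResult[Int] = ToyGetResult(_.nextInt())
implicit val toyGetDouble: ToyGetResult[Double] = ToyGetResult(_.nextDouble())
implicit val toyGetString: ToyGetResult[String] = ToyGetResult(_.nextString())

case class ToyCoffee(name: String, supID: Int, price: Double)

// The constructor's parameter types drive which implicit each << uses.
implicit val toyGetCoffee: ToyGetResult[ToyCoffee] =
  ToyGetResult(r => ToyCoffee(r.<<, r.<<, r.<<))

val row = new ToyPositionedResult(Vector("Colombian", 101, 7.99))
val coffee = toyGetCoffee.read(row)
// coffee == ToyCoffee("Colombian", 101, 7.99)
```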
Splicing Literal Values
While most parameters should be inserted into SQL statements as bind variables, sometimes you need to splice literal values directly into the statement, for example to abstract over table names or to run dynamically generated SQL code. You can use #$
instead of $
in all interpolators for this purpose, as shown in the following piece of code:
val table = "coffees"
sql"select * from #$table where name = $name".as[Coffee].headOption
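One way an interpolator could tell #$ apart from $ is by inspecting the literal parts of the StringContext: a part ending in # marks the next argument for literal splicing. This is only a sketch of the idea; Slick's macro-based implementation differs, and all names here are invented:

```scala
final case class SplicedSql(statement: String, params: List[Any])

implicit class SpliceInterpolator(private val sc: StringContext) {
  def toySplice(args: Any*): SplicedSql = {
    val sb = new StringBuilder(sc.parts.head)
    var params = List.empty[Any]
    for ((arg, part) <- args.zip(sc.parts.tail)) {
      if (sb.nonEmpty && sb.last == '#') {
        sb.setLength(sb.length - 1) // drop the '#' marker
        sb.append(arg.toString)     // splice the value as literal SQL text
      } else {
        sb.append('?')              // ordinary bind variable
        params = arg :: params
      }
      sb.append(part)
    }
    SplicedSql(sb.toString, params.reverse)
  }
}

val table = "coffees"
val name = "Colombian"
val q = toySplice"select * from #$table where name = $name"
// q.statement == "select * from coffees where name = ?"
// q.params    == List("Colombian")
```

Note that a spliced value bypasses the bind-variable mechanism entirely, so it must never come from untrusted input.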
Concatenating SQL statements
It is possible to combine partial SQL statements using the concat method:
(sql"select * from coffees where " concat name.fold(sql"true")(name => sql"name = $name")).as[Coffee]
Type-Checked SQL Statements
The interpolators you have seen so far only construct a SQL statement at runtime. This provides a safe and easy way of building statements but they are still just embedded strings. If you have a syntax error in a statement or the types don’t match up between the database and your Scala code, this cannot be detected at compile-time. You can use the tsql
interpolator instead of sql
to get just that:
def getSuppliers(id: Int): DBIO[Seq[(Int, String, String, String, String, String)]] =
  tsql"select * from suppliers where id > $id"
Note that tsql directly produces a DBIOAction of the correct type without requiring a call to .as.
In order to give the compiler access to the database, you have to provide a configuration that can be resolved at compile-time. This is done with the StaticDatabaseConfig annotation:
@StaticDatabaseConfig("file:src/main/resources/application.conf#tsql")
In this case it points to the path "tsql" in a local application.conf file, which must contain an appropriate configuration for a StaticDatabaseConfig object, not just a Database.
You can get application.conf resolved via the classpath (as usual) by omitting the path and only specifying a fragment in the URL, or you can use a resource: URL scheme for referencing an arbitrary classpath resource. In both cases, they have to be on the compiler's own classpath, not just the source path or the runtime classpath. Depending on the build tool this may not be possible, so it's usually better to use a relative file: URL.
You can also retrieve the statically configured DatabaseConfig at runtime:
val dc = DatabaseConfig.forAnnotation[JdbcProfile]
import dc.profile.api._
val db = dc.db
This gives you the Slick profile for the standard api._ import and the Database. Note that it is not mandatory to use the same configuration. You can get a Slick profile and Database at runtime in any other way you like and only use the StaticDatabaseConfig for compile-time checking.