Schemas
This chapter describes how to work with database schemas in Scala code, in particular how to write them manually, which is useful when you start writing an application without a pre-existing database. If you already have a schema in the database, you can also use the code generator to take this work off your hands.
In the code examples below we assume the following imports:
import java.sql.Date
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.reflect.ClassTag
import slick.jdbc.H2Profile.api.*
import slick.lifted.ProvenShape
If you’re new to Slick, please start with the Getting Started page.
Table Rows
In order to use the Scala API for type-safe queries, you need to define Table row classes for your database schema. These describe the structure of the tables:
class Coffees(tag: Tag) extends Table[(String, Int, Double, Int, Int)](tag, "COFFEES") {
def name = column[String]("COF_NAME", O.PrimaryKey)
def supID = column[Int]("SUP_ID")
def price = column[Double]("PRICE")
def sales = column[Int]("SALES", O.Default(0))
def total = column[Int]("TOTAL", O.Default(0))
def * = (name, supID, price, sales, total)
}
All columns are defined through the column method. Each column has a Scala type and a column name for the database (usually in upper-case). The following primitive types are supported out of the box for JDBC-based databases in JdbcProfile (with certain limitations imposed by the individual database profiles):
- Numeric types: Byte, Short, Int, Long, BigDecimal, Float, Double
- LOB types: java.sql.Blob, java.sql.Clob, Array[Byte]
- Date types: java.sql.Date, java.sql.Time, java.sql.Timestamp
- Java 8 date and time types: java.time.*
- Boolean
- String
- Unit
- java.util.UUID
Nullable columns are represented by Option[T] where T is one of the supported primitive types.
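For example, a nullable column is declared by putting Option in the column's type parameter. The table and column names below are illustrative, not part of the schema used elsewhere in this chapter:

```scala
class Contacts(tag: Tag) extends Table[(Int, Option[String])](tag, "CONTACTS") {
  def id    = column[Int]("ID", O.PrimaryKey)
  // A nullable column: NULL in the database maps to None in Scala
  def phone = column[Option[String]]("PHONE")
  def * = (id, phone)
}
```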
Note: Currently all operations on Option values use the database's null propagation semantics, which may differ from Scala's Option semantics. In particular, None === None evaluates to None. This behaviour may change in a future major release of Slick.
After the column name, you can add optional column options to a column definition. The applicable options are available through the table's O object. The following ones are defined for JdbcProfile:
- PrimaryKey: Mark the column as a (non-compound) primary key when creating the DDL statements.
- Default[T](defaultValue: T): Specify a default value for inserting data into the table without this column. This information is only used for creating DDL statements so that the database can fill in the missing information.
- Unique: Add a uniqueness constraint to the DDL statement for the column.
- SqlType(typeName: String): Use a non-standard database-specific type for the DDL statements (e.g. SqlType("VARCHAR(20)") for a String column).
- AutoInc: Mark the column as an auto-incrementing key when creating the DDL statements. Unlike the other column options, this one also has a meaning outside of DDL creation: many databases do not allow non-AutoInc columns to be returned when inserting data (often silently ignoring other columns), so Slick will check that the return column is properly marked as AutoInc where needed.
Every table requires a * method containing a default projection. This describes what you get back when you return rows (in the form of a table row object) from a query. Slick's * projection does not have to match the one in the database. You can add new columns (e.g. with computed values) or omit some columns as you like. The non-lifted type corresponding to the * projection is given as a type parameter to Table. For simple, non-mapped tables, this will be a single column type or a tuple of column types.
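For instance, a row class can declare a column that exists in the database but leave it out of the default projection. This is a hypothetical variant of the Coffees table above:

```scala
class CoffeeNames(tag: Tag) extends Table[(String, Double)](tag, "COFFEES") {
  def name  = column[String]("COF_NAME", O.PrimaryKey)
  def price = column[Double]("PRICE")
  def sales = column[Int]("SALES") // present in the table but omitted from *
  // The default projection returns only two of the table's columns
  def * = (name, price)
}
```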
If your database layout requires schema names, you can specify the schema name for a table in front of the table name, wrapped in Some():
class Coffees(tag: Tag)
extends Table[(String, Int, Double, Int, Int)](tag, Some("MYSCHEMA"), "COFFEES") {
//...
}
See the separate notes on support for the java.time.* types.
Table Query
Alongside the Table row class you also need a TableQuery value which represents the actual database table:
val coffees = TableQuery[Coffees]
The simple TableQuery[T] syntax is a macro which expands to a proper TableQuery instance that calls the table's constructor (new TableQuery(new T(_))).
You can also extend TableQuery to use it as a convenient namespace for additional functionality associated with the table:
object coffees extends TableQuery(new Coffees(_)) {
val findByName = this.findBy(_.name)
}
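Assuming the object above, findBy produces a pre-compiled query that can be applied and run like any other. A sketch, assuming db is an available Database instance:

```scala
// findByName is a compiled query parameterized by the coffee name
val action = coffees.findByName("Colombian").result
// Running it would yield the matching rows:
// val rows = Await.result(db.run(action), Duration.Inf)
```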
Mapped Tables
It is possible to define a mapped table that uses a custom type for its * projection by adding a bi-directional mapping to the custom type.
Mapped projections can also be created from individual columns, tuples of columns, or existing mapped projections.
Case classes with the mapTo macro
In most circumstances you can use the convenience method mapTo, which uses a compile-time macro to automatically fill in the mapping functions.
case class Person(id: Option[Int], first: String, last: String)
class People(tag: Tag) extends Table[Person](tag, "person") {
def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
def first = column[String]("first")
def last = column[String]("last")
def * = (id.?, first, last).mapTo[Person]
}
val people = TableQuery[People]
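With the mapped table, inserts and query results work directly in terms of Person. A sketch, assuming db is an available Database instance:

```scala
// The AutoInc id column is omitted on insert by default, so None is fine here
val insert = people += Person(None, "Grace", "Hopper")
// Query results come back as Person instances
val query = people.filter(_.last === "Hopper").result
// db.run(insert) followed by db.run(query) would yield a Seq[Person]
```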
The macro should work for most case classes, even those with user-defined companion objects. However, there are still some cases where you will have to use the <> operator, as described below.
Arbitrary mapping functions with the <> operator
You can also map to arbitrary types by using the <> operator. It takes two functions as parameters, one for mapping from the raw value (which may be a tuple) to the custom type, and one for mapping back.
The API is designed to be optimal for Scala 2 case classes, which generate a companion object with a tupled method, and an unapply method that returns an Option of the case class.
When using it with arbitrary mapping functions, however, it can be useful to call .shaped on the column or columns tuple, to get its type inferred properly. Otherwise you may have to add full type annotations to the mapping functions.
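As a sketch of this situation, consider a custom type whose companion provides no mapping helpers. The Pair type and PAIRS table here are made up for illustration:

```scala
// A custom type with no companion-generated tupled/unapply helpers
case class Pair(a: Int, b: String)

class Pairs(tag: Tag) extends Table[Pair](tag, "PAIRS") {
  def a = column[Int]("A")
  def b = column[String]("B")
  // .shaped fixes the tuple's shape so the mapping functions
  // need no explicit type annotations
  def * = (a, b).shaped <> (
    { case (a, b) => Pair(a, b) },
    (p: Pair) => Some((p.a, p.b))
  )
}
```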
case class User(id: Option[Int], first: String, last: String)
class Users(tag: Tag) extends Table[User](tag, "users") {
def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
def first = column[String]("first")
def last = column[String]("last")
def * = (id.?, first, last) <> ((User.apply _).tupled, User.unapply _)
}
val users = TableQuery[Users]
In Scala 2, case classes generate a default companion object that extends a function type. For instance, in the above snippet the compiler generates object User extends ((Option[Int], String, String) => User). Since Scala function types contain a tupled method, in the above snippet you could replace (User.apply _).tupled with just User.tupled. However, this is not a given.
- In Scala 2, if you explicitly define a companion object, it will not automatically extend a function type.
- In Scala 3, companions never extend a function type.
The workaround is to lift the apply method into a function type, and call tupled on that. That is why in the above snippet we use (User.apply _).tupled instead of User.tupled.
Alternatively, you can manually define .tupled in the companion object:
object User {
def tupled = (apply _).tupled
}
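The difference can be seen without any Slick code. A plain Scala 2 sketch:

```scala
case class User(id: Option[Int], first: String, last: String)
// An explicit companion does not extend a function type,
// so User.tupled does not exist here
object User

// Lifting apply to a function value restores a tupled method
val fromTuple: ((Option[Int], String, String)) => User = (User.apply _).tupled
val u = fromTuple((Some(1), "Grace", "Hopper"))
```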
Constraints
A foreign key constraint can be defined with a Table's foreignKey method. It first takes a name for the constraint, the referencing column(s) and the referenced table. The second argument list takes a function from the referenced table to its referenced column(s), as well as ForeignKeyAction values for onUpdate and onDelete, which are optional and default to NoAction. When creating the DDL statements for the table, the foreign key definition is added to them.
class Suppliers(tag: Tag) extends Table[(Int, String, String, String, String, String)](tag, "SUPPLIERS") {
def id = column[Int]("SUP_ID", O.PrimaryKey)
//...
}
val suppliers = TableQuery[Suppliers]
class Coffees(tag: Tag) extends Table[(String, Int, Double, Int, Int)](tag, "COFFEES") {
def supID = column[Int]("SUP_ID")
//...
def supplier = foreignKey("SUP_FK", supID, suppliers)(_.id, onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Cascade)
// compiles to SQL:
// alter table "COFFEES" add constraint "SUP_FK" foreign key("SUP_ID")
// references "SUPPLIERS"("SUP_ID")
// on update RESTRICT on delete CASCADE
}
val coffees = TableQuery[Coffees]
Independent of the actual constraint defined in the database, such a foreign key can be used to navigate to the referenced data with a join. For this purpose, it behaves the same as a manually defined utility method for finding the joined data:
def supplier = foreignKey("SUP_FK", supID, suppliers)(_.id, onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Cascade)
def supplier2 = suppliers.filter(_.id === supID)
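Either form can then be used in a query. For example, joining each coffee with its supplier by navigating the foreign key in a for comprehension (a sketch using the tables defined above):

```scala
// c.supplier behaves like a query for the referenced supplier row
val coffeeSuppliers = for {
  c <- coffees
  s <- c.supplier
} yield (c.name, s.id)
```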
A primary key constraint can be defined in a similar fashion by adding a method that calls primaryKey. This is useful for defining compound primary keys (which cannot be done with the O.PrimaryKey column option):
class A(tag: Tag) extends Table[(Int, Int)](tag, "a") {
def k1 = column[Int]("k1")
def k2 = column[Int]("k2")
def * = (k1, k2)
def pk = primaryKey("pk_a", (k1, k2))
// compiles to SQL:
// alter table "a" add constraint "pk_a" primary key("k1","k2")
}
Other indexes are defined in a similar way with the index method. They are non-unique by default unless you set the unique parameter:
class A(tag: Tag) extends Table[(Int, Int)](tag, "a") {
def k1 = column[Int]("k1")
def k2 = column[Int]("k2")
def * = (k1, k2)
def idx = index("idx_a", (k1, k2), unique = true)
// compiles to SQL:
// create unique index "idx_a" on "a" ("k1","k2")
}
All constraints are discovered reflectively by searching for methods with the appropriate return types which are defined in the table. This behavior can be customized by overriding the tableConstraints method.
Data Definition Language
DDL statements for a table can be created with its TableQuery's schema method. Multiple DDL objects can be concatenated with ++ to get a compound DDL object which can create and drop all entities in the correct order, even in the presence of cyclic dependencies between tables. The create, createIfNotExists, dropIfExists, drop and truncate methods produce the Actions for executing the DDL statements. To safely create and drop tables, use the methods createIfNotExists and dropIfExists:
val schema = coffees.schema ++ suppliers.schema
db.run(DBIO.seq(
schema.create,
schema.createIfNotExists,
//...
schema.drop,
schema.dropIfExists
))
You can use the statements method to get the SQL code, as for most other SQL-based Actions. Schema Actions are currently the only Actions that can produce more than one statement.
schema.create.statements.foreach(println)
schema.createIfNotExists.statements.foreach(println)
schema.truncate.statements.foreach(println)
schema.drop.statements.foreach(println)
schema.dropIfExists.statements.foreach(println)