Introduction

What is Slick?

Slick is Typesafe's Functional Relational Mapping (FRM) library for Scala that makes it easy to work with relational databases. It allows you to work with stored data almost as if you were using Scala collections while at the same time giving you full control over when a database access happens and which data is transferred. You can also use SQL directly. Execution of database actions is done asynchronously, making Slick a perfect fit for your reactive applications based on Play and Akka.

val limit = 10.0

// Your query could look like this:
( for( c <- coffees; if c.price < limit ) yield c.name ).result

// Or using Plain SQL String Interpolation:
sql"select COF_NAME from COFFEES where PRICE < $limit".as[String]

// Both queries result in SQL equivalent to:
// select COF_NAME from COFFEES where PRICE < 10.0

When using Scala instead of raw SQL for your queries you benefit from compile-time safety and compositionality. Slick can generate queries for different back-end databases including your own, using its extensible query compiler.
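
For example, running one of the queries above end to end could look like the following sketch. It assumes the Slick 3.x-style API used throughout this manual, an in-memory H2 database (the URL is only a placeholder) and the coffees TableQuery defined in the Features section below:

import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import slick.driver.H2Driver.api._

// Placeholder in-memory H2 database; any configured Database works the same way
val db = Database.forURL("jdbc:h2:mem:intro", driver = "org.h2.Driver")

val limit = 10.0
// db.run executes the database action asynchronously and returns a Future
val names: Future[Seq[String]] =
  db.run(coffees.filter(_.price < limit).map(_.name).result)

// Blocking here only to keep the sketch self-contained
println(Await.result(names, 10.seconds))

Nothing is executed until the action is passed to db.run, which returns a Future with the result.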

Get started learning Slick in minutes using the Hello Slick template in Typesafe Activator.

Features

Scala

  • Queries, Table & Column Mappings, and types are plain Scala
class Coffees(tag: Tag) extends Table[(String, Double)](tag, "COFFEES") {
  def name = column[String]("COF_NAME", O.PrimaryKey)
  def price = column[Double]("PRICE")
  def * = (name, price)
}
val coffees = TableQuery[Coffees]
  • Data access APIs similar to Scala collections
// Query that only returns the "name" column
coffees.map(_.name)

// Query that does a "where price < 10.0"
coffees.filter(_.price < 10.0)
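
More collection-style operators compose in the same way; the following lines are only a sketch built on the coffees table defined above:

// Sort by price and keep the three cheapest coffees
coffees.sortBy(_.price).take(3)

// Operators chain just like on Scala collections
coffees.filter(_.price < 10.0).sortBy(_.name).map(c => (c.name, c.price))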

Type-Safe

  • Let your IDE help you write your code
  • Find problems at compile-time instead of at runtime
// The result of "select PRICE from COFFEES" is a Seq of Double
// because of the type safe column definitions
val coffeePrices: Future[Seq[Double]] = db.run(
  coffees.map(_.price).result
)

// Query builders are type safe:
coffees.filter(_.price < 10.0)
// Using a string in the filter would result in a compilation error
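
As a sketch, this is the kind of query the compiler rejects, because price is a Rep[Double] column and cannot be compared with a String:

// coffees.filter(_.price < "10.0")  // does not compile: type mismatch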

Composable

  • Queries are functions that can be composed and reused
// Create a query for coffee names with a price less than 10, sorted by name
coffees.filter(_.price < 10.0).sortBy(_.name).map(_.name)
// The generated SQL is equivalent to:
// select COF_NAME from COFFEES where PRICE < 10.0 order by COF_NAME
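
Since queries are plain values, you can also wrap them in functions and reuse the pieces; a sketch building on the coffees table above (the helper name cheaperThan is only illustrative):

// A reusable query fragment, parameterized by a plain Double
def cheaperThan(limit: Double) = coffees.filter(_.price < limit)

// Compose the fragment further before turning it into a database action
val cheapNames = cheaperThan(10.0).sortBy(_.name).map(_.name).result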

Supported database systems

The following database systems are directly supported for type-safe queries: Derby, H2, HSQLDB, Microsoft Access, MySQL, PostgreSQL and SQLite (these correspond to the drivers in the capability table below). Other SQL databases can be accessed right away with a reduced feature set. Writing a fully featured plugin for your own SQL-based backend can be achieved with a reasonable amount of work. Support for other backends (like NoSQL) is under development but not yet available.

The following capabilities are supported by the drivers. “Yes” means that a capability is fully supported; a dash means that it is only partially supported or not supported at all. See the individual driver’s API documentation for details.

Driver Capabilities
Capability | DerbyDriver | H2Driver | HsqldbDriver | AccessDriver | MySQLDriver | PostgresDriver | SQLiteDriver
relational.other | Yes | Yes | Yes | Yes | Yes | Yes | Yes
relational.columnDefaults | Yes | Yes | Yes | - | Yes | Yes | Yes
relational.foreignKeyActions | Yes | Yes | Yes | - | Yes | Yes | Yes
relational.functionDatabase | - | Yes | Yes | - | Yes | Yes | -
relational.functionUser | Yes | Yes | Yes | - | Yes | Yes | -
relational.indexOf | Yes | Yes | Yes | Yes | Yes | Yes | Yes
relational.joinFull | - | - | Yes | - | - | Yes | -
relational.joinLeft | Yes | Yes | Yes | Yes | Yes | Yes | Yes
relational.joinRight | Yes | Yes | Yes | Yes | Yes | Yes | -
relational.likeEscape | Yes | Yes | Yes | - | Yes | Yes | Yes
relational.pagingDrop | Yes | Yes | Yes | - | Yes | Yes | Yes
relational.pagingNested | - | Yes | Yes | Yes | Yes | Yes | Yes
relational.pagingPreciseTake | Yes | Yes | Yes | - | Yes | Yes | Yes
relational.repeat | - | Yes | Yes | Yes | Yes | Yes | Yes
relational.replace | - | Yes | Yes | Yes | Yes | Yes | Yes
relational.reverse | - | - | Yes | Yes | Yes | Yes | Yes
relational.setByteArrayNull | Yes | Yes | Yes | - | Yes | Yes | Yes
relational.typeBigDecimal | Yes | Yes | Yes | - | Yes | Yes | -
relational.typeBlob | Yes | Yes | Yes | - | Yes | Yes | -
relational.typeLong | Yes | Yes | Yes | - | Yes | Yes | Yes
relational.zip | - | Yes | Yes | - | Yes | Yes | -
sql.other | Yes | Yes | Yes | Yes | Yes | Yes | Yes
sql.sequence | Yes | Yes | Yes | - | Yes | Yes | -
sql.sequenceCurr | - | Yes | - | Yes | Yes | Yes | Yes
sql.sequenceCycle | - | - | Yes | Yes | Yes | Yes | Yes
sql.sequenceLimited | Yes | Yes | Yes | Yes | - | Yes | Yes
sql.sequenceMax | Yes | - | Yes | Yes | Yes | Yes | Yes
sql.sequenceMin | Yes | - | Yes | Yes | Yes | Yes | Yes
jdbc.other | Yes | Yes | Yes | Yes | Yes | Yes | Yes
jdbc.booleanMetaData | - | Yes | Yes | Yes | Yes | Yes | -
jdbc.createModel | Yes | Yes | Yes | - | Yes | Yes | Yes
jdbc.defaultValueMetaData | Yes | Yes | Yes | Yes | Yes | Yes | -
jdbc.distinguishesIntTypes | Yes | Yes | Yes | Yes | Yes | Yes | -
jdbc.forceInsert | Yes | Yes | Yes | Yes | Yes | Yes | Yes
jdbc.insertOrUpdate | - | - | - | - | Yes | - | -
jdbc.mutable | Yes | Yes | Yes | Yes | Yes | Yes | -
jdbc.nullableNoDefault | Yes | Yes | Yes | Yes | - | - | Yes
jdbc.returnInsertKey | Yes | Yes | Yes | - | Yes | Yes | Yes
jdbc.returnInsertOther | - | - | Yes | - | - | Yes | -
jdbc.supportsByte | - | Yes | Yes | Yes | Yes | - | -

License

Slick is released under a BSD-style free and open source software license. See the chapter on the commercial Slick Extensions add-on package for details on licensing the Slick drivers for the big commercial database systems.

Compatibility Policy

Slick requires Scala 2.10 or 2.11. (For Scala 2.9 please use ScalaQuery, the predecessor of Slick).

Slick version numbers consist of an epoch, a major and minor version, and possibly a qualifier (for milestone, RC and SNAPSHOT versions).

For release versions (i.e. versions without a qualifier), backward binary compatibility is guaranteed between releases with the same epoch and major version (e.g. you could use 2.1.2 as a drop-in replacement for 2.1.0 but not for 2.0.0). Slick Extensions requires at least the same minor version of Slick (e.g. Slick Extensions 2.1.2 can be used with Slick 2.1.2 but not with Slick 2.1.1). Binary compatibility is not preserved for slick-codegen, which is generally used at compile-time.
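
In sbt terms this simply means depending on a concrete release version, for example (the version number below is only illustrative; use the release your project targets):

libraryDependencies += "com.typesafe.slick" %% "slick" % "2.1.0"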

We do not guarantee source compatibility but we try to preserve it within the same major release. Upgrading to a new major release may require some changes to your sources. We generally deprecate old features and keep them around for a full major release cycle (i.e. features which become deprecated in 2.1.0 will not be removed before 2.2.0) but this is not possible for all kinds of changes.

Release candidates have the same compatibility guarantees as the final versions to which they lead. There are no compatibility guarantees whatsoever for milestones and snapshots.

Query APIs

The Lifted Embedding is the standard API for type-safe queries and updates in Slick. Please see Getting Started for an introduction. Most of this user manual focuses on the Lifted Embedding.

For writing your own SQL statements you can use the Plain SQL API.
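
A rough sketch of a Plain SQL update, assuming the Slick 3.x sql/sqlu interpolators and the db and limit values from the introduction:

// sqlu produces an action that yields the number of affected rows
val updatedRows: Future[Int] =
  db.run(sqlu"update COFFEES set PRICE = PRICE * 0.9 where PRICE < $limit")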

The experimental Direct Embedding is available as an alternative to the Lifted Embedding.

Lifted Embedding

The name Lifted Embedding refers to the fact that you are not working with standard Scala types (as in the direct embedding) but with types that are lifted into a Rep type constructor. This becomes clear when you compare the types of a simple Scala collections example

case class Coffee(name: String, price: Double)
val coffees: List[Coffee] = //...

val l = coffees.filter(_.price > 8.0).map(_.name)
//                       ^       ^          ^
//                       Double  Double     String

... with the types of similar code using the lifted embedding:

class Coffees(tag: Tag) extends Table[(String, Double)](tag, "COFFEES") {
  def name = column[String]("COF_NAME")
  def price = column[Double]("PRICE")
  def * = (name, price)
}
val coffees = TableQuery[Coffees]

val q = coffees.filter(_.price > 8.0).map(_.name)
//                       ^       ^          ^
//               Rep[Double]  Rep[Double]  Rep[String]

All plain types are lifted into Rep. The same is true for the table row type Coffees, which is a subtype of Rep[(String, Double)]. Even the literal 8.0 is automatically lifted to a Rep[Double] by an implicit conversion, because that is what the > operator on Rep[Double] expects for the right-hand side. This lifting is necessary because the lifted types allow us to generate a syntax tree that captures the query computations. Getting plain Scala functions and values would not give us enough information for translating those computations to SQL.
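
Only when the query is run do you get unlifted Scala values back. A sketch, assuming the Slick 3.x action-based API (db.run returning a Future) used in the introduction:

// q is only a description of a query; turning it into an action and running it
// yields plain Scala values again
val namesAction: DBIO[Seq[String]] = q.result
val namesFuture: Future[Seq[String]] = db.run(namesAction)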