diff --git a/README.md b/README.md
index 17ffab8d0..ee7454f1f 100644
--- a/README.md
+++ b/README.md
@@ -42,7 +42,7 @@ This is a table of the available modules for the various Scala versions. Not all
Modules marked with "x" are still in beta or pre-publishing mode.
-| Module name | Scala 2.10.x | Scala 2.11.x | Scala 2.12.0 | Release date |
+| Module name | Scala 2.10.x | Scala 2.11.x | Scala 2.12.0 | Release date |
| ------------ | ------------------- | ------------------| ----------------- | -------------- |
| phantom-dse | yes | yes | yes | Released |
| phantom-udt | yes | yes | yes | Released |
@@ -51,6 +51,8 @@ Modules marked with "x" are still in beta or pre-publishing mode.
| phantom-spark | x | x | x | July 2017 |
| phantom-solr | x | x | x | July 2017 |
| phantom-migrations | x | x | x | September 2017 |
+| phantom-native | x | x | x | December 2017 |
+| phantom-java-dsl | x | x | x | December 2017 |
Using phantom
=============
diff --git a/comparison.md b/comparison.md
index 52bfdd39f..812517848 100644
--- a/comparison.md
+++ b/comparison.md
@@ -39,7 +39,7 @@ So why not just use the Datastax driver for Scala? We would like to start by say
#### Cons of using the Java driver from Scala
- Writing queries is not type-safe, and the `QueryBuilder` is mutable.
-- There is no DSL to define the tables, so dealing with them is still very much manual and string based.
+- There is no DSL to define the tables, so dealing with them is still very much manual and string based.
- Auto-generation of schema is not available, the CQL for tables or anything else must be manually created.
- The driver does not and cannot prevent you from doing bad things.
- You must constantly refer to the `session`.
@@ -61,6 +61,7 @@ It is built on top of the Datastax Java driver, and uses all the default connect
- A type safe Schema DSL that means you never have to deal with raw CQL again.
- A very advanced compile time mechanism that offers a fully type safe variant of CQL.
+- A tool that prevents you from doing any "bad" things by enforcing Cassandra rules at compile time.
- A natural DSL that doesn't require any new terminology and aims to introduce a minimal learning curve. Phantom is not a leaky abstraction and it is exclusively built to target Cassandra integration, so it supports all the latest features of CQL and doesn't require constantly mapping terminology. Unlike LINQ-style DSLs, for instance, the naming will largely have 100% correspondence to the CQL terminology you are already used to.
- Automated schema generation, automated table migrations, automated database generation and more, meaning you will never ever have to manually initialise CQL tables from scripts ever again.
- Native support of Scala concurrency primitives, from `scala.concurrent.Future` to more advanced access patterns such as reactive streams or even iteratees, available via separate dependencies.
@@ -117,43 +118,9 @@ case class Recipe(
side_id: UUID
)
-class Recipes extends CassandraTable[ConcreteRecipes, Recipe] {
+class Recipes extends CassandraTable[Recipes, Recipe] {
- object url extends StringColumn(this) with PartitionKey[String]
-
- object description extends OptionalStringColumn(this)
-
- object ingredients extends ListColumn[String](this)
-
- object servings extends OptionalIntColumn(this)
-
- object lastcheckedat extends DateTimeColumn(this)
-
- object props extends MapColumn[String, String](this)
-
- object side_id extends UUIDColumn(this)
-
-
- override def fromRow(r: Row): Recipe = {
- Recipe(
- url(r),
- description(r),
- ingredients(r),
- servings(r),
- lastcheckedat(r),
- props(r),
- side_id(r)
- )
- }
-}
-```
-
-As of version 2.0.0, phantom is capable of auto-generating the `fromRow` method, so the mapping DSL is reduced to:
-
-```
-abstract class Recipes extends CassandraTable[ConcreteRecipes, Recipe] with RootConnector {
-
- object url extends StringColumn(this) with PartitionKey[String]
+ object url extends StringColumn(this) with PartitionKey
object description extends OptionalStringColumn(this)
@@ -196,7 +163,7 @@ not possible in either of quill or the Java driver, because they do not operate
database.recipes.select.where(_.uid eqs someid)
```
-Quill, *based on our current understanding, will however happily compile and generate the query, it has no way
+Quill, *based on our current understanding*, will however happily compile and generate the query; it has no way
to know what you wanted to do with the `side_id` column.
@@ -236,7 +203,7 @@ In this category, both tools are imperfect and incomplete, and phantom has its o
the Quill comparison simply states: "You could extend Phantom by extending the DSL to add new features,
although it might not be a straightforward process.", which is a bit inaccurate.
-Being a very new player in the game, Quill is a nice toy when it comes to Cassandra feature support
+Being a very new player in the game, Quill is a nice toy when it comes to Cassandra feature support
and you will often find yourself needing to add features. Phantom has its gaps without a doubt, but it's a far more mature alternative,
and the occasions when extension is required are significantly rarer.
@@ -261,22 +228,21 @@ play-streams somewhere else in you app, otherwise it makes little sense to not s
much impossible to build Thrift services without a dependency on Thrift itself, so in that respect it is highly
unlikely that using those extra modules will end up bringing in more dependencies than you already have.
-- The one place where phantom used to suck is the dependency on `scala-reflect`, which is causing some ugly things inside the
-framework, namely the need for global locks to make reflection thread safe in the presence of multiple class loaders. This
-is however going away in 2.0.0, already available on `feature/2.0.0`, and we are replacing `scala-reflect` with a macro based approach.
-
+- The one place where phantom used to suck is the dependency on `scala-reflect`, which caused some ugly things inside the
+framework, namely the need for global locks to make reflection thread safe in the presence of multiple class loaders. This
+is now a thing of the past: as of 2.x.x, we have entirely replaced the runtime reflection mechanism with macros.
+
- The only notable dependencies of phantom are `shapeless` and `cassandra-driver-core`, the latter of which you will have
-inevitably. Shapeless is also quite light and compile time, it depends only on macro libraries such as `macro-compat`. You
-can have a look yourself [here](https://github.com/milessabin/shapeless). We also depend on an internal engine called the
-diesel engine, which itself only depends on `macro-compat`, since it's nothing more than a customised macro toolchain.
+inevitably. Shapeless is also quite light and compile-time only; it depends on nothing but a minuscule macro library called `macro-compat`, which phantom also requires for its own macros. You
+can have a look yourself [here](https://github.com/milessabin/shapeless). `phantom-dsl` has no other dependencies.
#### Documentation and commercial support
-Both tools can do a lot better in this category, but phantom is probably doing a little better in that department,
+Both tools can do a lot better in this category, but phantom is probably doing a little better in that department,
since we have a plethora of tests, blog posts, and resources, on how to do things in phantom. This is not yet
necessarily true of Quill, and we know very well just how challenging the ramp up process to stability can be.
-In terms of commercial support, phantom wins. We don't mean to start a debase on the virtues of open source, and
+In terms of commercial support, phantom wins. We don't mean to start a debate on the virtues of open source, and
we are aware most of the development community strongly favours OSS licenses and the word "commercial" is unpleasant.
However, we are constrained by the economic reality of having to pay the people competent enough to write this software
for the benefit of us all and make sure they get enough spare time to focus on these things, which is a lot less fun.
@@ -299,8 +265,8 @@ Let's sum up the points that we tried to make here in two key paragraphs.
- Phantom used to make it less interesting to extend support for custom types, however this is now trivially
done with `Primitive.derive`, which allows users to support new types by leveraging existing primitives.
-- Quill is an excellence piece of software and it has theoretically less boilerplate than phantom. there's boilerplate that can be reduced through QDSLs that cannot be reduced through an EDSL, if we are fighting who's the leanest meanest string generator Quill wins.
+- Quill is an excellent piece of software and it theoretically has less boilerplate than phantom. There is boilerplate that can be reduced through a QDSL that cannot be reduced through an EDSL; if we are fighting over who's the leanest, meanest string generator, Quill wins.
It's a vastly inferior tool at the application layer, and it supports such a small subset of Cassandra features that it's barely usable for anything real world, and it's even more unnatural for most people. Slick popularised the concepts to some extent, but some of the most basic functionalities you would want as part of your application lifecycle are not as easily addressable through a QDSL, or at least it has yet to happen.
Phantom is far more mature, feature rich, battle tested, and very widely adopted, with more resources and input from the founding team, a long standing roadmap and a key partnership with Datastax that helps us stay on top of all features.
-- Phantom is a lot easier to adopt and learn, simply as it doesn't introduce any new terminology. The mapping DSL and the `Database` object are all you need to know, so the learning curve is minimal.
\ No newline at end of file
+- Phantom is a lot easier to adopt and learn, simply as it doesn't introduce any new terminology. The mapping DSL and the `Database` object are all you need to know, so the learning curve is minimal.
diff --git a/docs/README.md b/docs/README.md
index 6434747bb..dc0b739d9 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,6 +1,6 @@
phantom
[![Build Status](https://travis-ci.org/outworkers/phantom.svg?branch=develop)](https://travis-ci.org/outworkers/phantom?branch=develop) [![Coverage Status](https://coveralls.io/repos/outworkers/phantom/badge.svg)](https://coveralls.io/r/outworkers/phantom) [![Codacy Rating](https://api.codacy.com/project/badge/grade/25bee222a7d142ff8151e6ceb39151b4)](https://www.codacy.com/app/flavian/phantom_2) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.outworkers/phantom-dsl_2.11/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.outworkers/phantom-dsl_2.11) [![Bintray](https://api.bintray.com/packages/outworkers/oss-releases/phantom-dsl/images/download.svg) ](https://bintray.com/outworkers/oss-releases/phantom-dsl/_latestVersion) [![ScalaDoc](http://javadoc-badge.appspot.com/com.outworkers/phantom-dsl_2.11.svg?label=scaladoc)](http://javadoc-badge.appspot.com/com.outworkers/phantom-dsl_2.11) [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/outworkers/phantom?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
-===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
+===============================================================================================================================
Reactive type-safe Scala driver for Apache Cassandra/Datastax Enterprise
@@ -24,6 +24,3 @@ We offer a comprehensive range of elite Scala development services, including bu
- Advanced Scala and Cassandra training
We are huge advocates of open source and we will open source every project we can! To read more about our open source efforts, click [here](http://www.outworkers.com/work).
-
-
-
diff --git a/docs/basics/batches.md b/docs/basics/batches.md
new file mode 100644
index 000000000..f19508fe9
--- /dev/null
+++ b/docs/basics/batches.md
@@ -0,0 +1,73 @@
+phantom
+[![Build Status](https://travis-ci.org/outworkers/phantom.svg?branch=develop)](https://travis-ci.org/outworkers/phantom?branch=develop) [![Coverage Status](https://coveralls.io/repos/github/outworkers/phantom/badge.svg?branch=develop)](https://coveralls.io/github/outworkers/phantom?branch=develop) [![Codacy Rating](https://api.codacy.com/project/badge/grade/25bee222a7d142ff8151e6ceb39151b4)](https://www.codacy.com/app/flavian/phantom_2) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.outworkers/phantom-dsl_2.11/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.outworkers/phantom-dsl_2.11) [![Bintray](https://api.bintray.com/packages/outworkers/oss-releases/phantom-dsl/images/download.svg) ](https://bintray.com/outworkers/oss-releases/phantom-dsl/_latestVersion) [![ScalaDoc](http://javadoc-badge.appspot.com/com.outworkers/phantom-dsl_2.11.svg?label=scaladoc)](http://javadoc-badge.appspot.com/com.outworkers/phantom-dsl_2.11) [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/outworkers/phantom?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+===============================================================================================================================
+Batch statements
+=============================================
+
+Phantom also brings in support for batch statements. To use them, see [IterateeBigReadPerformanceTest.scala](https://github.com/outworkers/phantom/blob/develop/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/iteratee/IterateeBigReadPerformanceTest.scala). Before you read further, remember that **batch statements are not used to improve performance**.
+
+Read [the official docs](http://docs.datastax.com/en/cql/3.1/cql/cql_reference/batch_r.html) for more details, but in short: **batches guarantee atomicity, and they are at least about 30% slower on average than parallel writes**, as they require more round trips. If you think you're optimising performance with batches, you might need to find alternative means.
+
+We have tested with 10,000 statements per batch, and 1000 batches processed simultaneously. Before you run the test, beware that it takes ~40 minutes.
+
+Batches use lazy iterators and daisy-chain them to offer thread-safe behaviour. They are not memory intensive and you can expect consistent processing speed even with 1,000,000 statements per batch.
+
+Batches are immutable, and adding a new record will result in a new batch, just like most things in Scala, so be careful to chain the calls, as the sketch below shows.
+
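+A minimal sketch of the chaining pitfall, reusing the `db.exampleTable` and `someId` placeholders from the examples below:
+
+```scala
+import com.outworkers.phantom.dsl._
+
+// Wrong: `add` returns a *new* batch, so the statement below is silently
+// discarded and the original batch stays empty.
+val batch = Batch.logged
+batch.add(db.exampleTable.update.where(_.id eqs someId).modify(_.name setTo "a"))
+
+// Right: keep the value returned by every `add` by chaining the calls.
+Batch.logged
+  .add(db.exampleTable.update.where(_.id eqs someId).modify(_.name setTo "a"))
+  .future()
+```
+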
+phantom also supports `COUNTER` batch updates and `UNLOGGED` batch updates.
+
+
+LOGGED batch statements
+===========================================================
+
+```scala
+
+import com.outworkers.phantom.dsl._
+
+Batch.logged
+ .add(db.exampleTable.update.where(_.id eqs someId).modify(_.name setTo "blabla"))
+ .add(db.exampleTable.update.where(_.id eqs someOtherId).modify(_.name setTo "blabla2"))
+ .future()
+
+```
+
+COUNTER batch statements
+============================================================
+
+```scala
+
+import com.outworkers.phantom.dsl._
+
+Batch.counter
+ .add(db.exampleTable.update.where(_.id eqs someId).modify(_.someCounter increment 500L))
+ .add(db.exampleTable.update.where(_.id eqs someOtherId).modify(_.someCounter decrement 300L))
+ .future()
+```
+
+Counter operations also offer a standard overloaded operator syntax, so instead of `increment` and `decrement`
+you can also use `+=` and `-=` to achieve the same thing.
+
+```scala
+
+import com.outworkers.phantom.dsl._
+
+Batch.counter
+ .add(db.exampleTable.update.where(_.id eqs someId).modify(_.someCounter += 500L))
+  .add(db.exampleTable.update.where(_.id eqs someOtherId).modify(_.someCounter -= 300L))
+ .future()
+```
+
+UNLOGGED batch statements
+============================================================
+
+```scala
+
+import com.outworkers.phantom.dsl._
+
+Batch.unlogged
+ .add(db.exampleTable.update.where(_.id eqs someId).modify(_.name setTo "blabla"))
+ .add(db.exampleTable.update.where(_.id eqs someOtherId).modify(_.name setTo "blabla2"))
+ .future()
+
+```
diff --git a/docs/basics/connectors.md b/docs/basics/connectors.md
new file mode 100644
index 000000000..e69de29bb
diff --git a/docs/basics/database.md b/docs/basics/database.md
new file mode 100644
index 000000000..52301382c
--- /dev/null
+++ b/docs/basics/database.md
@@ -0,0 +1,282 @@
+### How to model your application with phantom
+
+Phantom offers an interesting first-class construct called the `Database` class. It seems quite simple, but it is designed to serve several purposes simultaneously:
+
+- Provide encapsulation and prevent `session` and `keySpace` or other Cassandra/Phantom specific constructs from leaking into other layers of the application.
+- Provide a configurable mechanism to allow automated schema generation.
+- Provide a type-safe way to fully describe the available domain.
+- Provide a way for test code to easily override settings via the cake pattern.
+
+Let's explore some of the design goals in more detail to understand how things work under the hood.
+
+#### Database is designed to be the final level of encapsulation
+
+The `Database` class is the final level of segregation between the database layer of your application and every other layer, essentially guaranteeing encapsulation. Beyond this point, no consumer of your database service should ever know that you are using `Cassandra` as a database.
+
+At the very bottom level, phantom queries require several implicits in scope to execute, as the sketch after this list shows:
+
+- The `implicit session: com.datastax.driver.core.Session`, that tells us which Cassandra cluster to target.
+- The `implicit keySpace: KeySpace`, describing which keyspace to target. It's just a `String`, but it's more strongly typed as we don't want `implicit` strings in our code, ever.
+- The `implicit ex: ExecutionContextExecutor`, which is a Java compatible flavour of `scala.concurrent.ExecutionContext` and basically allows users to supply any context of their choosing for executing database queries.
+
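+A minimal sketch of what these look like at a raw call site; the values below are placeholders for illustration only:
+
+```scala
+import scala.concurrent.ExecutionContextExecutor
+import com.datastax.driver.core.Session
+import com.outworkers.phantom.dsl._
+
+implicit val session: Session = ??? // a real session comes from a cluster connection
+implicit val keySpace: KeySpace = KeySpace("myapp_example")
+implicit val ec: ExecutionContextExecutor = scala.concurrent.ExecutionContext.global
+```
+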
+However, from an app or service consumer perspective, when pulling in dependencies or calling a database service, as a developer I do not want to be concerned with providing a `session` or a `keySpace`. Under some circumstances I may want to provide a custom `ExecutionContext` but that's a different story.
+
+That's why phantom comes with very concise levels of segregation between the various consumer levels. When we create a table, we mix `RootConnector` into the table class. Let's look at an example:
+
+```scala
+case class Recipe(
+ url: String,
+ description: Option[String],
+ ingredients: List[String],
+ servings: Option[Int],
+ lastCheckedAt: DateTime,
+ props: Map[String, String],
+ uid: UUID
+)
+
+class Recipes extends CassandraTable[Recipes, Recipe] with RootConnector {
+
+ object url extends StringColumn(this) with PartitionKey
+
+ object description extends OptionalStringColumn(this)
+
+ object ingredients extends ListColumn[String](this)
+
+ object servings extends OptionalIntColumn(this)
+
+ object lastcheckedat extends DateTimeColumn(this)
+
+ object props extends MapColumn[String, String](this)
+
+ object uid extends UUIDColumn(this)
+
+}
+```
+The whole purpose of `RootConnector` is quite simple: it says an implementor will specify the `session` and `keySpace` of choice. It looks like this, and it's available in phantom by default via the default import, `import com.outworkers.phantom.dsl._`.
+
+```scala
+
+
+import com.datastax.driver.core.Session
+
+trait RootConnector {
+
+ implicit def space: KeySpace
+
+ implicit def session: Session
+}
+
+```
+
+Later on, when we start creating databases, we pass in a `ContactPoint`, or what we call a `connector` in plainer English, which fully encapsulates a Cassandra connection with all the possible details and settings required to run an application.
+
+```scala
+class RecipesDatabase(
+ override val connector: CassandraConnection
+) extends Database[RecipesDatabase](connector) {
+
+ object recipes extends Recipes with Connector
+
+}
+```
+The interesting bit is that `Connector` is an inner trait of the `connector` object, which statically points all implicit resolutions for a `session` and a `keySpace` inside a `database` instance to one specific `session` and `keySpace`.
+
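+To make the effect concrete, here is a sketch of consuming the database above; the contact point value is an assumption borrowed from the examples further down:
+
+```scala
+import scala.concurrent.ExecutionContext.Implicits.global
+import com.outworkers.phantom.dsl._
+
+val database = new RecipesDatabase(
+  ContactPoint.local.keySpace("recipes_example")
+)
+
+// No implicit session or keySpace is declared here: both are resolved
+// statically through the Connector mixed into `recipes`.
+val result = database.recipes.select.where(_.url eqs "some-url").one()
+```
+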
+"It seems a bit complex, why bother to go to such lengths?" The answer to that is simple, in an ideal world:
+
+- You want to invisibly and statically point to the same session object and you want to avoid all possible race conditions.
+
+- E.g. you don't want multiple sessions to be instantiated, and you don't want your app to connect to Cassandra multiple times just because a lot of threads are trying to write to Cassandra at the same time.
+
+- You don't want to explicitly refer to `session` every single time, because that's just Java-esque boilerplate. You wouldn't do it in CQL and that's why phantom tries to offer a more fluent DSL instead, the only distinction being in phantom you drive from entities, so the table gets "specified first".
+
+We mask away all that complexity from the end user with the help of a few constructs, `ContactPoint`, `Database` and `DatabaseProvider`.
+
+
+#### The `DatabaseProvider` injector trait
+
+Sometimes developers choose to wrap a `database` further, into specific database services that move the final destination "up one more level", making the services the final level of encapsulation. Why do this? As you will see below, it's useful for separating services for specific entities and for guaranteeing app level consistency of data for de-normalised database entities and indexes.
+
+And this is why we offer another native construct, namely the `DatabaseProvider` trait. This is another really simple but really powerful trait that's generally used cake pattern style.
+
+```scala
+
+trait DatabaseProvider[T <: Database[T]] {
+
+ implicit def session: Session = database.session
+
+ implicit def space: KeySpace = database.space
+
+ def database: T
+
+ /**
+ * A helper shorthand method.
+ * @return A reference to the database object.
+ */
+ def db: T = database
+}
+```
+
+The design is pretty simple: it aims to provide a way of injecting a reference to a particular `database` into a consumer. For the sake of argument, let's say we are designing a `UserService` backed by Cassandra and phantom. Here's how it might look:
+
+```scala
+
+class AppDatabase(override val connector: KeySpaceDef) extends Database[AppDatabase](connector) {
+ object users extends Users with Connector
+ object usersByEmail extends UsersByEmail with Connector
+}
+
+
+// So now we are saying we have a trait
+// that will eventually provide a reference to a specific database.
+trait AppDatabaseProvider extends DatabaseProvider[AppDatabase]
+
+trait UserService extends AppDatabaseProvider {
+
+ /**
+ * Stores a user into the database guaranteeing application level consistency of data.
+ * E.g we have two tables, one indexing users by ID and another indexing users by email.
+ * As in Cassandra we need to de-normalise data, it's natural we need to store it twice.
+ * But that also means we have to write to 2 tables every time, and here's how
+ * @param user A user case class instance.
+ * @return A future containing the result of the last write operation in the sequence.
+ */
+ def store(user: User): Future[ResultSet] = {
+ for {
+ byId <- database.users.store(user)
+ byEmail <- database.usersByEmail.store(user)
+ } yield byEmail
+ }
+
+ def findById(id: UUID): Future[Option[User]] = database.users.findById(id)
+ def findByEmail(email: String): Future[Option[User]] = database.usersByEmail.findByEmail(email)
+}
+```
+
+If I, as your colleague, now wanted to consume the `UserService`, I would create an instance or use a pre-existing one, and call methods that only require passing in known domain objects as parameters. Notice how `session`, `keySpace` and everything else Cassandra specific has gone away?
+
+All I can see is a `def store(user: User)`, which is all very sensible, so the entire usage of Cassandra is now transparent to end consumers. That's a really cool thing, and granted there are a few hoops to jump through to get here, but it's hopefully worth the mileage.
+
+Pretty much the only thing left is the `ResultSet`, and we can get rid of that too should we choose to map it to a domain specific class, as sketched below. This is useful if we want to completely hide the fact that we are using Cassandra from any database service consumer.
+
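+For example, a hypothetical wrapper inside `UserService` (the `StoreResult` type is our own placeholder, not part of phantom) could look like this:
+
+```scala
+import scala.concurrent.{ ExecutionContext, Future }
+
+// A domain-level result type, so consumers never see a Cassandra ResultSet.
+case class StoreResult(applied: Boolean)
+
+def storeUser(user: User)(implicit ec: ExecutionContext): Future[StoreResult] =
+  store(user).map(rs => StoreResult(rs.wasApplied()))
+```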
+
+#### Using the DatabaseProvider to specify environments
+
+Ok, so now we have all the elements in place to create the cake pattern; the next step is to flesh out the environments we want. In 99% of cases, we only have two provider traits in an entire app: one for `production` or runtime mode, the other for `test`, since we often want a test cluster to fire requests against during the `sbt test` phase.
+
+
+Let's go ahead and create two complete examples. We are going to make some simple assumptions about how settings for Cassandra look in production/runtime vs tests; don't take them too seriously, they exist purely to show what you can do with the phantom API.
+
+Let's look at the most basic example of defining a test connector, which will use all default settings plus a call to `noHeartbeat()`, which disables heartbeats by setting a pooling option to 0 inside the `ClusterBuilder`. We will go through that in more detail in a second, to show how we can specify more complex options using `ContactPoint`.
+
+```scala
+
+object TestConnector {
+ val connector = ContactPoint.local
+ .noHeartbeat()
+ .keySpace("myapp_example")
+}
+
+object TestDatabase extends AppDatabase(TestConnector.connector)
+
+trait TestDatabaseProvider extends AppDatabaseProvider {
+ override def database: AppDatabase = TestDatabase
+}
+```
+
+It may feel verbose or slightly too much at first, but the wrapping objects work in our favour to guarantee thread-safe, just-in-time initialised, static access to the various bits that we truly want to be static. Again, we don't want more than one contact point initialised, more than one session, and so on; we want it all crystal clear and static from the get go.
+
+And this is how you would use that provider trait now. We're going to assume ScalaTest is the testing framework in use, but of course that doesn't matter.
+
+```scala
+
+import org.scalatest.{BeforeAndAfterAll, OptionValues, Matchers, FlatSpec}
+import org.scalatest.concurrent.ScalaFutures
+import scala.concurrent.ExecutionContext.Implicits.global
+
+class UserServiceTest extends FlatSpec with Matchers with ScalaFutures with BeforeAndAfterAll with OptionValues with TestDatabaseProvider {
+
+ val userService = new UserService with TestDatabaseProvider {}
+
+ override def beforeAll(): Unit = {
+ super.beforeAll()
+ // all our tables will now be initialised automatically against the target keyspace.
+ database.create()
+ }
+
+ it should "store a user using the user service and retrieve it by id and email" in {
+ val user = User(...)
+
+ val chain = for {
+ store <- userService.store(user)
+ byId <- userService.findById(user.id)
+ byEmail <- userService.findByEmail(user.email)
+ } yield (byId, byEmail)
+
+ whenReady(chain) { case (byId, byEmail) =>
+ byId shouldBe defined
+ byId.value shouldEqual user
+
+ byEmail shouldBe defined
+ byEmail.value shouldEqual user
+ }
+ }
+}
+
+```
+
+
+#### Automated schema generation using Database
+
+One of the coolest things you can do in phantom is automatically derive the schema for a table from its DSL definition. This is useful as you can basically forget about ever typing manual CQL or worrying about where your CQL scripts are stored and how to load them in time via bash or anything funky like that.
+
+As far as we are concerned, that way of doing things is old school and deprecated, and we don't want to be looking backwards, so auto-generation to the rescue. There isn't really much to it; continuing the above examples, it's just a question of the `create.ifNotExists()` method being available "for free".
+
+For example:
+
+```scala
+database.users.create.ifNotExists()
+```
+
+Now obviously that's the super simplistic example, so let's look at how you might implement more advanced scenarios. Phantom provides a full schema DSL including all alter and create query options so it should be quite trivial to implement any kind of query no matter how complex.
+
+Regardless of how effective these settings would be in a production environment (do not try this at home), this example is meant to illustrate that you can create very complex queries with the existing DSL.
+
+```scala
+database.users
+ .create.ifNotExists()
+ .`with`(compaction eqs LeveledCompactionStrategy.sstable_size_in_mb(50))
+ .and(compression eqs LZ4Compressor.crc_check_chance(0.5))
+ .and(comment eqs "testing")
+ .and(read_repair_chance eqs 5D)
+ .and(dclocal_read_repair_chance eqs 5D)
+```
+
+To override the settings used during schema auto-generation at `Database` level, phantom provides the `autocreate` method inside every table, which can be easily overridden. This is again an example of chaining numerous DSL methods and doesn't attempt to demonstrate any kind of effective production settings.
+
+When you later call `database.create`, `database.createAsync`, or any other flavour of auto-generation on a `Database`, the `autocreate` overridden below will be respected.
+
+```scala
+
+class UserDatabase(override val connector: KeySpaceDef) extends Database[UserDatabase](connector) {
+  object users extends Users with Connector {
+    override def autocreate(keySpace: KeySpace): CreateQuery.Default[Users, User] = {
+ create.ifNotExists()(keySpace)
+ .`with`(compaction eqs LeveledCompactionStrategy.sstable_size_in_mb(50))
+ .and(compression eqs LZ4Compressor.crc_check_chance(0.5))
+ .and(comment eqs "testing")
+ .and(read_repair_chance eqs 5D)
+ .and(dclocal_read_repair_chance eqs 5D)
+ }
+ }
+  object usersByEmail extends UsersByEmail with Connector
+}
+
+```
+
+By default, `autocreate` simply performs a lightweight create query, as follows; the resulting CQL will look very familiar. This is a simple example, not related to any of the above examples.
+
+```scala
+def autocreate(keySpace: KeySpace): CreateQuery.Default[T, R] = create.ifNotExists()(keySpace)
+
+// CREATE TABLE IF NOT EXISTS $keyspace.$table (id uuid, name text, unixTimestamp timestamp, PRIMARY KEY (id, unixTimestamp))
+```
diff --git a/docs/commercial/support.md b/docs/commercial/support.md
new file mode 100644
index 000000000..e8f9cf024
--- /dev/null
+++ b/docs/commercial/support.md
@@ -0,0 +1,24 @@
+phantom
+[![Build Status](https://travis-ci.org/outworkers/phantom.svg?branch=develop)](https://travis-ci.org/outworkers/phantom?branch=develop) [![Coverage Status](https://coveralls.io/repos/github/outworkers/phantom/badge.svg?branch=develop)](https://coveralls.io/github/outworkers/phantom?branch=develop) [![Codacy Rating](https://api.codacy.com/project/badge/grade/25bee222a7d142ff8151e6ceb39151b4)](https://www.codacy.com/app/flavian/phantom_2) [![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.outworkers/phantom-dsl_2.11/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.outworkers/phantom-dsl_2.11) [![Bintray](https://api.bintray.com/packages/outworkers/oss-releases/phantom-dsl/images/download.svg) ](https://bintray.com/outworkers/oss-releases/phantom-dsl/_latestVersion) [![ScalaDoc](http://javadoc-badge.appspot.com/com.outworkers/phantom-dsl_2.11.svg?label=scaladoc)](http://javadoc-badge.appspot.com/com.outworkers/phantom-dsl_2.11) [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/outworkers/phantom?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
+===============================================================================================================================
+
+We, the people behind phantom, run a software development house specialised in Scala and NoSQL. If you are after enterprise grade training or production support for using phantom and Cassandra, [Outworkers](http://outworkers.com) is here to help!
+
+We offer a comprehensive range of elite Scala development services, including but not limited to:
+
+- Software development
+- Remote contractors for hire
+- Advanced Scala and Cassandra training
+
+
+We are big fans of open source and we will open source every project we can! To read more about our OSS efforts, click [here](http://www.outworkers.com/work).
+
+### phantom-pro
+
+`phantom-pro` is a heavily upgraded version of phantom which includes several extremely powerful features that will not be available in the open source version, including but not limited to:
+
+- Full online access to a complete set of phantom tutorials, accompanied by our direct support.
+- Advanced support for integrating phantom with Apache Spark.
+- A very powerful schema management framework called `phantom-migrations` which allows you not only to completely automate schema management and do it all in Scala, but also to let phantom automatically handle most use cases for you.
+- `phantom-autotables`, an advanced macro based framework which will auto-generate, auto-manage, and auto-migrate all your queries from case classes.
+- Full support for integration with Datastax Enterprise and Datastax Ops Center, including protocol level support for authentication, reporting, and much more.
diff --git a/docs/quickstart.md b/docs/quickstart.md
index 0b4d532d3..60b928929 100644
--- a/docs/quickstart.md
+++ b/docs/quickstart.md
@@ -5,14 +5,13 @@
===================================================================
-There are no additional resolvers required for any version of phantom newer than 2.0.0. All Outworkers libraries are open source,
-licensed via Apache V2. As of version 2.2.1, phantom has no external transitive dependencies other than shapeless
+There are no additional resolvers required for any version of phantom newer than 2.0.0. All Outworkers libraries are open source, licensed via Apache V2. As of version 2.2.1, phantom has no external transitive dependencies other than shapeless
and the Java driver.
#### For most things, all you need is a dependency on the phantom-dsl module.
-For most things, all you need is the main ```phantom-dsl``` module. This will bring in the default module with all the query generation ability, as well as `phantom-connectors` and database objects that help you manage your entire database layer on the fly. All other modules implement enhanced integration with other tools, but you don't need them to get started.
-This module only depends on the `datastax-java-driver` and the `shapeless-library`.
+For most things, all you need is the main `phantom-dsl` module. This will bring in the default module with all the query generation ability, as well as `phantom-connectors` and database objects that help you manage your entire database layer on the fly. All other modules implement enhanced integration with other tools, but you don't need them to get started.
+This module only depends on the `datastax-java-driver` and the `shapeless` library.
```scala
libraryDependencies ++= Seq(
@@ -69,4 +68,4 @@ Spray users will probably be affected by a conflict in shapeless versions. To fi
libraryDependencies ++= Seq(
"io.spray" %% "spray-routing-shapeless2" % SprayVersion
)
-```
\ No newline at end of file
+```
diff --git a/phantom-dsl/src/main/resources/default.logback.xml b/phantom-dsl/src/main/resources/default.logback.xml
new file mode 100644
index 000000000..1e2c814c9
--- /dev/null
+++ b/phantom-dsl/src/main/resources/default.logback.xml
@@ -0,0 +1,19 @@
+
+
+
+
+
+ %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/phantom-dsl/src/main/scala/com/outworkers/phantom/CassandraTable.scala b/phantom-dsl/src/main/scala/com/outworkers/phantom/CassandraTable.scala
index 11c75ffe8..f5fb16f98 100644
--- a/phantom-dsl/src/main/scala/com/outworkers/phantom/CassandraTable.scala
+++ b/phantom-dsl/src/main/scala/com/outworkers/phantom/CassandraTable.scala
@@ -23,7 +23,6 @@ import com.outworkers.phantom.column.AbstractColumn
import com.outworkers.phantom.connectors.KeySpace
import com.outworkers.phantom.macros.TableHelper
import org.slf4j.{Logger, LoggerFactory}
-import shapeless.Typeable
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContextExecutor}
diff --git a/phantom-dsl/src/main/scala/com/outworkers/phantom/database/Database.scala b/phantom-dsl/src/main/scala/com/outworkers/phantom/database/Database.scala
index 76b8435af..38f48ffd9 100644
--- a/phantom-dsl/src/main/scala/com/outworkers/phantom/database/Database.scala
+++ b/phantom-dsl/src/main/scala/com/outworkers/phantom/database/Database.scala
@@ -37,7 +37,7 @@ abstract class Database[
implicit lazy val session: Session = connector.session
- val tables: Set[CassandraTable[_, _]] = helper.tables(this.asInstanceOf[DB])
+ val tables: Seq[CassandraTable[_, _]] = helper.tables(this.asInstanceOf[DB])
def shutdown(): Unit = {
blocking {
@@ -98,7 +98,7 @@ abstract class Database[
* execute an entire sequence of queries.
*/
private[phantom] def autodrop(): ExecutableStatementList = {
- new ExecutableStatementList(tables.toSeq.map {
+ new ExecutableStatementList(tables.map {
table => table.alter().drop().qb
})
}
@@ -138,7 +138,7 @@ abstract class Database[
* execute an entire sequence of queries.
*/
private[phantom] def autotruncate(): ExecutableStatementList = {
- new ExecutableStatementList(tables.toSeq.map(_.truncate().qb))
+ new ExecutableStatementList(tables.map(_.truncate().qb))
}
/**
diff --git a/phantom-dsl/src/main/scala/com/outworkers/phantom/database/DatabaseProvider.scala b/phantom-dsl/src/main/scala/com/outworkers/phantom/database/DatabaseProvider.scala
index f57e4c634..acc7b6067 100644
--- a/phantom-dsl/src/main/scala/com/outworkers/phantom/database/DatabaseProvider.scala
+++ b/phantom-dsl/src/main/scala/com/outworkers/phantom/database/DatabaseProvider.scala
@@ -15,7 +15,19 @@
*/
package com.outworkers.phantom.database
-trait DatabaseProvider[T <: Database[T]] {
+import com.datastax.driver.core.{ Session, VersionNumber }
+import com.outworkers.phantom.connectors.{ KeySpace, SessionAugmenterImplicits }
+
+trait DatabaseProvider[T <: Database[T]] extends SessionAugmenterImplicits {
+
+ def cassandraVersion: Option[VersionNumber] = database.connector.cassandraVersion
+
+ def cassandraVersions: Set[VersionNumber] = database.connector.cassandraVersions
+
+ implicit def session: Session = database.session
+
+ implicit def space: KeySpace = database.space
+
def database: T
/**
diff --git a/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/DatabaseHelper.scala b/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/DatabaseHelper.scala
index f072ca551..6ebf21b1d 100644
--- a/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/DatabaseHelper.scala
+++ b/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/DatabaseHelper.scala
@@ -23,7 +23,7 @@ import com.outworkers.phantom.connectors.KeySpace
import scala.reflect.macros.blackbox
trait DatabaseHelper[T <: Database[T]] {
- def tables(db: T): Set[CassandraTable[_ ,_]]
+ def tables(db: T): Seq[CassandraTable[_ ,_]]
def createQueries(db: T)(implicit keySpace: KeySpace): ExecutableCreateStatementsList
}
@@ -42,7 +42,7 @@ class DatabaseHelperMacro(override val c: blackbox.Context) extends RootMacro(c)
val tpe = weakTypeOf[T]
val tableSymbol = tq"com.outworkers.phantom.CassandraTable[_, _]"
- val accessors = filterMembers[T, CassandraTable[_, _]]()
+ val accessors = filterDecls[CassandraTable[_, _]](tpe)
val prefix = q"com.outworkers.phantom.database"
@@ -57,8 +57,8 @@ class DatabaseHelperMacro(override val c: blackbox.Context) extends RootMacro(c)
q"""
new com.outworkers.phantom.macros.DatabaseHelper[$tpe] {
- def tables(db: $tpe): scala.collection.immutable.Set[$tableSymbol] = {
- scala.collection.immutable.Set.apply[$tableSymbol](..$tableList)
+ def tables(db: $tpe): scala.collection.immutable.Seq[$tableSymbol] = {
+ scala.collection.immutable.Seq.apply[$tableSymbol](..$tableList)
}
def createQueries(db: $tpe)(implicit space: $keySpaceTpe): $listType = {
diff --git a/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/RootMacro.scala b/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/RootMacro.scala
index 6a25aec05..d094c3dd2 100644
--- a/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/RootMacro.scala
+++ b/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/RootMacro.scala
@@ -185,8 +185,11 @@ class RootMacro(val c: blackbox.Context) {
def fromRow: Option[Tree] = {
if (unmatched.isEmpty) {
- val columnNames = matched.sortBy(_.left.index).map { m => q"$tableTerm.${m.right.name}.apply($rowTerm)" }
- Some(q"""new $recordType(..$columnNames)""")
+ val columnNames = matched.sortBy(_.left.index).map {
+ m => q"$tableTerm.${m.right.name}.apply($rowTerm)"
+ }
+ val tree = q"""new $recordType(..$columnNames)"""
+ Some(tree)
} else {
None
}
@@ -306,7 +309,6 @@ class RootMacro(val c: blackbox.Context) {
if (unmatchedColumns.isEmpty) {
recString
} else {
- logger.debug(s"Found unmatched types for ${printType(tableTpe)}: ${debugList(unmatchedColumns)}")
val cols = unmatchedColumns.map(f => s"${f.name.decodedName.toString} -> ${printType(f.tpe)}")
cols.mkString(", ") + s", $recString"
}
@@ -354,6 +356,10 @@ class RootMacro(val c: blackbox.Context) {
}
}
+ def filterDecls[Filter: TypeTag](source: Type): Seq[Symbol] = {
+ source.declarations.sorted.filter(_.typeSignature <:< typeOf[Filter])
+ }
+
def filterMembers[T : WeakTypeTag, Filter : TypeTag](
exclusions: Symbol => Option[Symbol] = { s: Symbol => Some(s) }
): Seq[Symbol] = {
@@ -403,4 +409,4 @@ class RootMacro(val c: blackbox.Context) {
}
}
}
-}
\ No newline at end of file
+}
diff --git a/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/TableHelper.scala b/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/TableHelper.scala
index 31d203ac6..49ca39e81 100644
--- a/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/TableHelper.scala
+++ b/phantom-dsl/src/main/scala/com/outworkers/phantom/macros/TableHelper.scala
@@ -127,13 +127,13 @@ class TableHelperMacro(override val c: whitebox.Context) extends RootMacro(c) {
/**
* Predicate that checks two fields refer to the same type.
- * @param source The source, which is a tuple of two [[Record.Field]] values.
+   * @param left The record field being matched.
+   * @param right The type of the candidate column field.
* @return True if the record field type is equal to the candidate column type
* or if there is an implicit conversion from the left field type to the right field type.
*/
- private[this] def predicate(source: (Type, Type)): Boolean = {
- val (col, rec) = source
- (col =:= rec) || (c.inferImplicitView(EmptyTree, col, rec) != EmptyTree)
+ private[this] def predicate(left: Record.Field, right: Type): Boolean = {
+ (left.tpe =:= right)// || (c.inferImplicitView(EmptyTree, left.tpe, right) != EmptyTree)
}
def variations(term: TermName): List[TermName] = {
@@ -157,7 +157,7 @@ class TableHelperMacro(override val c: whitebox.Context) extends RootMacro(c) {
): TableDescriptor = {
unprocessed match {
case recField :: tail =>
- columnFields.find { case (tpe, seq) => predicate(recField.tpe -> tpe) } map { case (_, seq) => seq } match {
+ columnFields.find { case (tpe, seq) => predicate(recField, tpe) } map { case (_, seq) => seq } match {
// It's possible that after all easy matches have been exhausted, no column fields are left to match
// with remaining record fields for the given type.
case None =>
@@ -244,10 +244,10 @@ class TableHelperMacro(override val c: whitebox.Context) extends RootMacro(c) {
columnFields: ListMap[Type, Seq[TermName]],
recordFields: List[Record.Field],
descriptor: TableDescriptor,
- unprocessed: List[Record.Field]
+ unprocessed: List[Record.Field] = Nil
): TableDescriptor = {
recordFields match { case recField :: tail =>
- columnFields.find { case (tpe, seq) => predicate(recField.tpe -> tpe) } map { case (_, seq) => seq } match {
+ columnFields.find { case (tpe, seq) => predicate(recField, tpe) } map { case (_, seq) => seq } match {
case None =>
val un = Unmatched(recField, s"Table doesn't contain a column of type ${printType(recField.tpe)}")
extractorRec(columnFields, tail, descriptor withoutMatch un, unprocessed)
@@ -349,8 +349,7 @@ class TableHelperMacro(override val c: whitebox.Context) extends RootMacro(c) {
extractorRec(
colFields.typeMap,
recordMembers.toList,
- TableDescriptor(tableTpe, recordTpe, colFields),
- Nil
+ TableDescriptor(tableTpe, recordTpe, colFields)
)
}
@@ -422,7 +421,7 @@ class TableHelperMacro(override val c: whitebox.Context) extends RootMacro(c) {
def debug: $macroPkg.Debugger = ${descriptor.debugger}
}
- new $clsName(): $macroPkg.TableHelper.Aux[$tableType, $recordType, ${descriptor.storeType}]
+ new $clsName(): $macroPkg.TableHelper[$tableType, $recordType]
"""
}
}
diff --git a/phantom-dsl/src/test/resources/logback-test.xml b/phantom-dsl/src/test/resources/logback-test.xml
index 2e7418b67..705247ff3 100644
--- a/phantom-dsl/src/test/resources/logback-test.xml
+++ b/phantom-dsl/src/test/resources/logback-test.xml
@@ -12,8 +12,8 @@
-
+
-
\ No newline at end of file
+
diff --git a/phantom-dsl/src/test/scala/com/outworkers/phantom/PhantomSuite.scala b/phantom-dsl/src/test/scala/com/outworkers/phantom/PhantomSuite.scala
index 48e9661e5..275423ff4 100644
--- a/phantom-dsl/src/test/scala/com/outworkers/phantom/PhantomSuite.scala
+++ b/phantom-dsl/src/test/scala/com/outworkers/phantom/PhantomSuite.scala
@@ -33,7 +33,6 @@ import scala.concurrent.duration.Duration
trait PhantomBaseSuite extends Suite with Matchers
with BeforeAndAfterAll
- with RootConnector
with ScalaFutures
with JsonFormats
with OptionValues {
@@ -78,11 +77,9 @@ trait TestDatabaseProvider extends DatabaseProvider[TestDatabase] {
override val database: TestDatabase = TestDatabase
}
-trait PhantomSuite extends FlatSpec with PhantomBaseSuite with TestDatabase.Connector with TestDatabaseProvider {
+trait PhantomSuite extends FlatSpec with PhantomBaseSuite with TestDatabaseProvider {
def requireVersion[T](v: VersionNumber)(fn: => T): Unit = if (cassandraVersion.value.compareTo(v) >= 0) fn else ()
}
-trait PhantomFreeSuite extends FreeSpec with PhantomBaseSuite with TestDatabase.Connector {
- val database = TestDatabase
-}
+trait PhantomFreeSuite extends FreeSpec with PhantomBaseSuite with TestDatabaseProvider
diff --git a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/primitives/PrimitivesTest.scala b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/primitives/PrimitivesTest.scala
index 40a18b8c3..ee95a3a30 100644
--- a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/primitives/PrimitivesTest.scala
+++ b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/primitives/PrimitivesTest.scala
@@ -94,9 +94,3 @@ class PrimitivesTest extends FlatSpec with Matchers {
"""val ev = Primitive[Option[Record]]""" should compile
}
}
-
-case class Record(value: String)
-
-object Record {
- implicit val recordPrimitive: Primitive[Record] = Primitive.derive[Record, String](_.value)(Record.apply)
-}
diff --git a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/primitives/domain.scala b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/primitives/domain.scala
new file mode 100644
index 000000000..adcbb938e
--- /dev/null
+++ b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/primitives/domain.scala
@@ -0,0 +1,24 @@
+/*
+ * Copyright 2013 - 2017 Outworkers Ltd.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package com.outworkers.phantom.builder.primitives
+
+case class Record(value: String)
+
+object Record {
+ implicit val recordPrimitive: Primitive[Record] = {
+ Primitive.derive[Record, String](_.value)(Record.apply)
+ }
+}
diff --git a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/DistinctTest.scala b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/DistinctTest.scala
index 9a7362587..03663eae3 100644
--- a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/DistinctTest.scala
+++ b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/DistinctTest.scala
@@ -20,13 +20,13 @@ import java.util.UUID
import com.outworkers.phantom.PhantomSuite
import com.outworkers.phantom.dsl._
import com.outworkers.phantom.tables._
-import org.joda.time.DateTime
+import org.joda.time.{ DateTime, DateTimeZone }
class DistinctTest extends PhantomSuite {
override def beforeAll(): Unit = {
- super.beforeAll()
- database.tableWithCompoundKey.insertSchema()
+ super.beforeAll()
+ database.tableWithCompoundKey.insertSchema()
}
it should "return distinct primary keys" in {
@@ -59,5 +59,5 @@ class DistinctTest extends PhantomSuite {
}
}
- private[this] implicit def string2date(date: String): DateTime = new DateTime(date)
+ private[this] implicit def string2date(date: String): DateTime = new DateTime(date, DateTimeZone.UTC)
}
diff --git a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/InsertCasTest.scala b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/InsertCasTest.scala
index 78d7652e3..069c3daff 100644
--- a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/InsertCasTest.scala
+++ b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/InsertCasTest.scala
@@ -51,17 +51,15 @@ class InsertCasTest extends PhantomSuite {
multi <- database.primitives.select.where(_.pkey eqs row.pkey).fetch()
} yield (one, multi)
- whenReady(chain) {
- case (res1, res3) => {
- info("The one query should return a record")
- res1 shouldBe defined
+ whenReady(chain) { case (res1, res3) =>
+ info("The one query should return a record")
+ res1 shouldBe defined
- info("And the record should equal the inserted record")
- res1.value shouldEqual row
+ info("And the record should equal the inserted record")
+ res1.value shouldEqual row
- info("And only one record should be retrieved from a range fetch")
- res3 should have size 1
- }
+ info("And only one record should be retrieved from a range fetch")
+ res3 should have size 1
}
}
diff --git a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/SelectJsonTest.scala b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/SelectJsonTest.scala
index c06b72253..2f01634ed 100644
--- a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/SelectJsonTest.scala
+++ b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/SelectJsonTest.scala
@@ -57,10 +57,10 @@ class SelectJsonTest extends PhantomSuite {
val chain = for {
store <- database.primitives.store(row).future()
- get <- database.primitives.select(_.pkey, _.long, _.boolean, _.bDecimal, _.double, _.float, _.inet, _.int)
+ one <- database.primitives.select(_.pkey, _.long, _.boolean, _.bDecimal, _.double, _.float, _.inet, _.int)
.json()
.where(_.pkey eqs row.pkey).one()
- } yield get
+ } yield one
if (cassandraVersion.value >= Version.`2.2.0`) {
whenReady(chain) { res =>
@@ -74,4 +74,4 @@ class SelectJsonTest extends PhantomSuite {
}
}
}
-}
\ No newline at end of file
+}
diff --git a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/SetOperationsTest.scala b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/SetOperationsTest.scala
index c7540b1d1..ac2eb3112 100644
--- a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/SetOperationsTest.scala
+++ b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/crud/SetOperationsTest.scala
@@ -24,7 +24,7 @@ class SetOperationsTest extends PhantomSuite {
override def beforeAll(): Unit = {
super.beforeAll()
- TestDatabase.testTable.insertSchema()
+ database.testTable.insertSchema()
}
it should "append an item to a set column" in {
@@ -37,10 +37,8 @@ class SetOperationsTest extends PhantomSuite {
db <- TestDatabase.testTable.select(_.setText).where(_.key eqs item.key).one()
} yield db
- whenReady(chain) {
- items => {
- items.value shouldBe item.setText + someItem
- }
+ whenReady(chain) { items =>
+ items.value shouldBe item.setText + someItem
}
}
@@ -49,15 +47,13 @@ class SetOperationsTest extends PhantomSuite {
val someItems = Set("test5", "test6")
val chain = for {
- insertDone <- TestDatabase.testTable.store(item).future()
- update <- TestDatabase.testTable.update.where(_.key eqs item.key).modify(_.setText addAll someItems).future()
- db <- TestDatabase.testTable.select(_.setText).where(_.key eqs item.key).one()
+ insertDone <- database.testTable.store(item).future()
+ update <- database.testTable.update.where(_.key eqs item.key).modify(_.setText addAll someItems).future()
+ db <- database.testTable.select(_.setText).where(_.key eqs item.key).one()
} yield db
- whenReady(chain) {
- items => {
- items.value shouldBe item.setText ++ someItems
- }
+ whenReady(chain) { items =>
+ items.value shouldBe item.setText ++ someItems
}
}
@@ -67,15 +63,13 @@ class SetOperationsTest extends PhantomSuite {
val removal = "test6"
val chain = for {
- insertDone <- TestDatabase.testTable.store(item).future()
- update <- TestDatabase.testTable.update.where(_.key eqs item.key).modify(_.setText remove removal).future()
- db <- TestDatabase.testTable.select(_.setText).where(_.key eqs item.key).one()
+ insertDone <- database.testTable.store(item).future()
+ update <- database.testTable.update.where(_.key eqs item.key).modify(_.setText remove removal).future()
+ db <- database.testTable.select(_.setText).where(_.key eqs item.key).one()
} yield db
- whenReady(chain) {
- items => {
- items.value shouldBe someItems.diff(Set(removal))
- }
+ whenReady(chain) { items =>
+ items.value shouldBe someItems.diff(Set(removal))
}
}
@@ -85,15 +79,13 @@ class SetOperationsTest extends PhantomSuite {
val removal = Set("test5", "test6")
val chain = for {
- insertDone <- TestDatabase.testTable.store(item).future()
- update <- TestDatabase.testTable.update.where(_.key eqs item.key).modify(_.setText removeAll removal).future()
- db <- TestDatabase.testTable.select(_.setText).where(_.key eqs item.key).one()
+ insertDone <- database.testTable.store(item).future()
+ update <- database.testTable.update.where(_.key eqs item.key).modify(_.setText removeAll removal).future()
+ db <- database.testTable.select(_.setText).where(_.key eqs item.key).one()
} yield db
- whenReady(chain) {
- items => {
- items.value shouldBe someItems.diff(removal)
- }
+ whenReady(chain) { items =>
+ items.value shouldBe someItems.diff(removal)
}
}
}
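
The other recurring change swaps hard references to the `TestDatabase` singleton for the `database` accessor inherited from the suite's `DatabaseProvider`, so a suite no longer cares which concrete database it runs against. A sketch of the wiring, modelled on the pattern phantom's own fixtures use (the class, object, and keyspace names are illustrative, and the import paths assume phantom 2.x):

```scala
import com.outworkers.phantom.dsl._

class MyTestDatabase(override val connector: CassandraConnection)
  extends Database[MyTestDatabase](connector) {
  // Tables are mounted here, e.g.:
  // object testTable extends TestTable with Connector
}

object MyTestDatabase extends MyTestDatabase(
  ContactPoint.local.keySpace("phantom_test")
)

trait MyDbProvider extends DatabaseProvider[MyTestDatabase] {
  // Suites mix this in and write `database.testTable...`; pointing a
  // whole suite at another database only touches this one override.
  override def database: MyTestDatabase = MyTestDatabase
}
```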
diff --git a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/specialized/InOperatorTest.scala b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/specialized/InOperatorTest.scala
index 4cdf9933e..916b002f5 100644
--- a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/specialized/InOperatorTest.scala
+++ b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/specialized/InOperatorTest.scala
@@ -24,21 +24,19 @@ class InOperatorTest extends PhantomSuite {
override def beforeAll(): Unit = {
super.beforeAll()
- TestDatabase.recipes.insertSchema()
+ database.recipes.insertSchema()
}
it should "find a record with a in operator if the record exists" in {
val recipe = gen[Recipe]
val chain = for {
- done <- TestDatabase.recipes.store(recipe).future()
- select <- TestDatabase.recipes.select.where(_.url in List(recipe.url, gen[EmailAddress].value)).one()
+ done <- database.recipes.store(recipe).future()
+ select <- database.recipes.select.where(_.url in List(recipe.url, gen[EmailAddress].value)).one()
} yield select
- whenReady(chain) {
- res => {
- res.value.url shouldEqual recipe.url
- }
+ whenReady(chain) { res =>
+ res.value.url shouldEqual recipe.url
}
}
@@ -46,14 +44,12 @@ class InOperatorTest extends PhantomSuite {
val recipe = gen[Recipe]
val chain = for {
- done <- TestDatabase.recipes.store(recipe).future()
- select <- TestDatabase.recipes.select.where(_.url in List(gen[EmailAddress].value)).one()
+ done <- database.recipes.store(recipe).future()
+ select <- database.recipes.select.where(_.url in List(gen[EmailAddress].value)).one()
} yield select
- whenReady(chain) {
- res => {
- res shouldBe empty
- }
+ whenReady(chain) { res =>
+ res shouldBe empty
}
}
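
For the record, the two branches being asserted: `in` takes a plain Scala `List` of candidate partition keys, and `.one()` resolves to `None` when nothing matches. Condensed from the test above (the non-matching URLs are made up):

```scala
// Hit: the stored recipe's url is among the candidates.
database.recipes.select
  .where(_.url in List(recipe.url, "http://not-stored.example.com"))
  .one() // Future[Option[Recipe]] completing with Some(recipe)

// Miss: no candidate matches a stored partition key.
database.recipes.select
  .where(_.url in List("http://also-not-stored.example.com"))
  .one() // Future[Option[Recipe]] completing with None
```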
diff --git a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/specialized/TupleColumnTest.scala b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/specialized/TupleColumnTest.scala
index 923fa76e0..d2bdcdb3a 100644
--- a/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/specialized/TupleColumnTest.scala
+++ b/phantom-dsl/src/test/scala/com/outworkers/phantom/builder/query/db/specialized/TupleColumnTest.scala
@@ -38,11 +38,9 @@ class TupleColumnTest extends PhantomSuite {
rec <- database.tuple2Table.findById(sample.id)
} yield rec
- whenReady(chain) {
- res => {
- res shouldBe defined
- res.value shouldEqual sample
- }
+ whenReady(chain) { res =>
+ res shouldBe defined
+ res.value shouldEqual sample
}
}
@@ -59,14 +57,12 @@ class TupleColumnTest extends PhantomSuite {
rec2 <- database.tuple2Table.findById(sample.id)
} yield (rec, rec2)
- whenReady(chain) {
- case (beforeUpdate, afterUpdate) => {
- beforeUpdate shouldBe defined
- beforeUpdate.value shouldEqual sample
+ whenReady(chain) { case (beforeUpdate, afterUpdate) =>
+ beforeUpdate shouldBe defined
+ beforeUpdate.value shouldEqual sample
- afterUpdate shouldBe defined
- afterUpdate.value.tp shouldEqual sample2.tp
- }
+ afterUpdate shouldBe defined
+ afterUpdate.value.tp shouldEqual sample2.tp
}
}
@@ -80,11 +76,9 @@ class TupleColumnTest extends PhantomSuite {
rec <- database.nestedTupleTable.findById(sample.id)
} yield rec
- whenReady(chain) {
- res => {
- res shouldBe defined
- res.value shouldEqual sample
- }
+ whenReady(chain) { res =>
+ res shouldBe defined
+ res.value shouldEqual sample
}
}
@@ -97,18 +91,19 @@ class TupleColumnTest extends PhantomSuite {
val chain = for {
store <- insert.future()
rec <- database.nestedTupleTable.findById(sample.id)
- update <- database.nestedTupleTable.update.where(_.id eqs sample.id).modify(_.tp setTo sample2.tp).future()
+ update <- database.nestedTupleTable.update
+ .where(_.id eqs sample.id)
+ .modify(_.tp setTo sample2.tp)
+ .future()
rec2 <- database.nestedTupleTable.findById(sample.id)
} yield (rec, rec2)
- whenReady(chain) {
- case (beforeUpdate, afterUpdate) => {
- beforeUpdate shouldBe defined
- beforeUpdate.value shouldEqual sample
+ whenReady(chain) { case (beforeUpdate, afterUpdate) =>
+ beforeUpdate shouldBe defined
+ beforeUpdate.value shouldEqual sample
- afterUpdate shouldBe defined
- afterUpdate.value.tp shouldEqual sample2.tp
- }
+ afterUpdate shouldBe defined
+ afterUpdate.value.tp shouldEqual sample2.tp
}
}
@@ -123,12 +118,10 @@ class TupleColumnTest extends PhantomSuite {
rec <- database.tupleCollectionsTable.findById(sample.id)
} yield rec
- whenReady(chain) {
- res => {
- res shouldBe defined
- res.value.id shouldEqual sample.id
- res.value.tuples should contain theSameElementsAs sample.tuples
- }
+ whenReady(chain) { res =>
+ res shouldBe defined
+ res.value.id shouldEqual sample.id
+ res.value.tuples should contain theSameElementsAs sample.tuples
}
}
@@ -142,20 +135,20 @@ class TupleColumnTest extends PhantomSuite {
val chain = for {
store <- insert.future()
rec <- database.tupleCollectionsTable.findById(sample.id)
- update <- database.tupleCollectionsTable.update.where(_.id eqs sample.id).modify(_.tuples append appended).future()
+ update <- database.tupleCollectionsTable.update
+ .where(_.id eqs sample.id)
+ .modify(_.tuples append appended)
+ .future()
rec2 <- database.tupleCollectionsTable.findById(sample.id)
} yield (rec, rec2)
- whenReady(chain) {
- case (beforeUpdate, afterUpdate) => {
+ whenReady(chain) { case (beforeUpdate, afterUpdate) =>
+ beforeUpdate shouldBe defined
+ beforeUpdate.value.id shouldEqual sample.id
+ beforeUpdate.value.tuples should contain theSameElementsAs sample.tuples
- beforeUpdate shouldBe defined
- beforeUpdate.value.id shouldEqual sample.id
- beforeUpdate.value.tuples should contain theSameElementsAs sample.tuples
-
- afterUpdate shouldBe defined
- afterUpdate.value.tuples should contain (appended)
- }
+ afterUpdate shouldBe defined
+ afterUpdate.value.tuples should contain (appended)
}
}
}
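
Besides the closure cleanup, two hunks above re-wrap update chains that had grown past the margin, putting one builder stage per line; the shape is worth calling out because the chain then reads as the sequence of CQL clauses it generates:

```scala
// One builder stage per line: UPDATE ... WHERE ... SET ..., then execute.
database.nestedTupleTable.update
  .where(_.id eqs sample.id)       // restrict to the partition
  .modify(_.tp setTo sample2.tp)   // SET tp = <tuple>
  .future()                        // execute the statement
```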
diff --git a/phantom-dsl/src/test/scala/com/outworkers/phantom/tables/TestDatabase.scala b/phantom-dsl/src/test/scala/com/outworkers/phantom/tables/TestDatabase.scala
index 31ec751e6..530615535 100644
--- a/phantom-dsl/src/test/scala/com/outworkers/phantom/tables/TestDatabase.scala
+++ b/phantom-dsl/src/test/scala/com/outworkers/phantom/tables/TestDatabase.scala
@@ -15,8 +15,6 @@
*/
package com.outworkers.phantom.tables
-import java.util.UUID
-
import com.datastax.driver.core.SocketOptions
import com.outworkers.phantom.builder.query.CreateQuery
import com.outworkers.phantom.builder.serializers.KeySpaceSerializer
diff --git a/phantom-dsl/src/test/scala/com/outworkers/phantom/tables/bugs/SchemaBug.scala b/phantom-dsl/src/test/scala/com/outworkers/phantom/tables/bugs/SchemaBug.scala
new file mode 100644
index 000000000..f05a125a0
--- /dev/null
+++ b/phantom-dsl/src/test/scala/com/outworkers/phantom/tables/bugs/SchemaBug.scala
@@ -0,0 +1,31 @@
+/*
+ * Copyright 2013 - 2017 Outworkers Ltd.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package com.outworkers.phantom.tables.bugs
+
+import com.outworkers.phantom.dsl._
+
+case class SchemaBugModel(
+ id: Int,
+ quality: Int,
+ name: String
+)
+
+abstract class Schema extends CassandraTable[Schema, SchemaBugModel] with RootConnector {
+ object id extends IntColumn(this) with PartitionKey
+ object quality extends IntColumn(this)
+ object name extends StringColumn(this)
+ object eventId extends LongColumn(this)
+}
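
The new fixture uses the phantom 2.x column style (`PartitionKey` with no type argument, no handwritten `fromRow`) and mixes in `RootConnector`, which leaves session and keyspace abstract until the table is mounted in a `Database`. A hedged sketch of that mounting, following phantom's usual pattern (the database class and keyspace name are assumptions, not part of this PR):

```scala
import com.outworkers.phantom.dsl._

class BugsDatabase(override val connector: CassandraConnection)
  extends Database[BugsDatabase](connector) {

  // Mounting the table supplies the session and keyspace that
  // RootConnector leaves abstract on the Schema fixture.
  object schemaBug extends Schema with Connector
}

object BugsDatabase extends BugsDatabase(
  ContactPoint.local.keySpace("phantom_bugs")
)
```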
diff --git a/phantom-example/src/test/scala/com/outworkers/phantom/example/ExampleSuite.scala b/phantom-example/src/test/scala/com/outworkers/phantom/example/ExampleSuite.scala
index 8eb23fa12..4bb27efaf 100644
--- a/phantom-example/src/test/scala/com/outworkers/phantom/example/ExampleSuite.scala
+++ b/phantom-example/src/test/scala/com/outworkers/phantom/example/ExampleSuite.scala
@@ -24,7 +24,7 @@ trait RecipesDbProvider extends DatabaseProvider[RecipesDatabase] {
override def database: RecipesDatabase = RecipesDatabase
}
-trait ExampleSuite extends FlatSpec with PhantomBaseSuite with RecipesDbProvider with RecipesDatabase.Connector {
+trait ExampleSuite extends FlatSpec with PhantomBaseSuite with RecipesDbProvider {
override def beforeAll(): Unit = {
super.beforeAll()
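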
diff --git a/phantom-streams/src/test/logback-test.xml b/phantom-streams/src/test/logback-test.xml
index 2e7418b67..705247ff3 100644
--- a/phantom-streams/src/test/logback-test.xml
+++ b/phantom-streams/src/test/logback-test.xml
@@ -12,8 +12,8 @@
(XML element content was lost in extraction; the hunk rewrites two stripped-out logback configuration lines and adds the missing newline at end of file)
diff --git a/phantom-streams/src/test/scala/com/outworkers/phantom/streams/suites/iteratee/BigTest.scala b/phantom-streams/src/test/scala/com/outworkers/phantom/streams/suites/iteratee/BigTest.scala
index 6b77c639c..5fbc4ec79 100644
--- a/phantom-streams/src/test/scala/com/outworkers/phantom/streams/suites/iteratee/BigTest.scala
+++ b/phantom-streams/src/test/scala/com/outworkers/phantom/streams/suites/iteratee/BigTest.scala
@@ -29,6 +29,6 @@ trait BigTest extends PhantomSuite {
.setReadTimeoutMillis(connectionTimeoutMillis)
.setConnectTimeoutMillis(connectionTimeoutMillis)
)
- ).noHeartbeat().keySpace(keySpace).session
+ ).noHeartbeat().keySpace(space.name).session
}
}
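
The `BigTest` fix passes the keyspace by its string name: the `keySpace(...)` overload used on this builder takes a `String`, and `space.name` unwraps the implicit `KeySpace`. A sketch of the full connector chain the hunk sits in, assuming the cluster hook is `withClusterBuilder` as elsewhere in phantom; the timeout value and keyspace name are illustrative:

```scala
import com.datastax.driver.core.SocketOptions
import com.outworkers.phantom.dsl._

val connectionTimeoutMillis = 60000 // illustrative; BigTest defines its own

// Long socket timeouts for the oversized fetch, no heartbeat, and the
// keyspace passed by name rather than as the KeySpace wrapper.
val bigSession: Session = ContactPoint.local
  .withClusterBuilder(
    _.withSocketOptions(
      new SocketOptions()
        .setReadTimeoutMillis(connectionTimeoutMillis)
        .setConnectTimeoutMillis(connectionTimeoutMillis)
    )
  )
  .noHeartbeat()
  .keySpace("phantom_big_test")
  .session
```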
diff --git a/phantom-thrift/src/test/resources/log4j.xml b/phantom-thrift/src/test/resources/log4j.xml
index cb090accb..3bba1ec4d 100644
--- a/phantom-thrift/src/test/resources/log4j.xml
+++ b/phantom-thrift/src/test/resources/log4j.xml
@@ -24,7 +24,7 @@
(XML element content was lost in extraction; the hunk rewrites one stripped-out log4j configuration line)