In the /server/src/config directory, copy .env-example into .env-development and populate the provided fields as needed. This file is required when developing.

In the .env-development file, configure your database host to localhost and the port specified in the postgres service in docker-compose.yml, which is 54320 by default. Please note the following two things:
- Be sure that the database credentials you specify do in fact point to the correct service; if you have an existing service running at the specified host and port on your machine, the server may not accurately report a connection error when connecting to what it expects to be a database, and may hang during initial startup
- If you want to change any values in the docker-compose.yml file, you must set those corresponding values in YOUR ENVIRONMENT; that is, create a matching environment variable with a custom value of your own choice, and docker should honor that. DO NOT make changes to docker-compose.yml and commit them.
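As a sketch, the database portion of .env-development might look like the following. The variable names here are illustrative placeholders; only the DB_* prefix and the default port 54320 come from this document, so use the keys that .env-example actually provides:

```
# Illustrative names; check .env-example for the real keys
DB_HOST=localhost
DB_PORT=54320
```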
You must also configure the google api section of .env-development in order for the Google APIs to be accessible. Those steps are as follows:
- Create a project in Google Cloud Platform
- Navigate to "Menu" > "APIs & Services"
- Click, "Enable APIS AND SERVICES"
- Search for "Google Calendar API" and enable
- Search for "Google People API" and enable
- From the project dashboard (menu icon, top left),
click "APIs & services" > Dashboard
- Click, "OAuth consent screen"
- Select, "External"
- Click, "Create"
- Fill out the page accordingly and click, "Save"
- From the project dashboard (menu icon, top left),
click "APIs & services" > Dashboard
- Click, left navigation menu: "Credentials"
- Click, "+ CREATE CREDENTIALS"
- Click, "Create OAuth client ID"
- Select, Web application
- Name the app:
Gairos
- Apply restrictions (optional; do as needed)
- You are then prompted with a dialog that contains a Client ID and Client Secret
- Record these in your .env-development where appropriate
  - IMPORTANT: callback urls must match here and in the server
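The Google section of .env-development might then contain entries along these lines. The variable names below are hypothetical guesses for illustration only; use the keys that .env-example actually provides, and keep the callback URL identical to the one registered in Google Cloud:

```
# Illustrative names; check .env-example for the real keys
GOOGLE_CLIENT_ID=<your client id>
GOOGLE_CLIENT_SECRET=<your client secret>
GOOGLE_CALLBACK_URL=<must match the callback registered in Google Cloud>
```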
In an attempt to improve developer experience, you can set either:

- DB_SYNC_WITH_SEQUELIZE to true or false. Note: this option will be forcibly set to false if NODE_ENV matches *prod*
- DEV_AUTO_LOGIN to true or false, which will initialize a session using the first seeded user when you make your first request to the server. This is useful to prevent having to log in manually each time.
  - This is particularly useful when developing on the back-end; see the section on GraphiQL Setup for more info on authentication when working on the back-end
  - If working on the front-end, you may want to turn this off so that you can test the Google API configuration through your own account (as the seeded user does not have a google account)
- If true (FOR NON-PRODUCTION ENVIRONMENTS ONLY):
  - When the server boots up, the database will be completely emptied, and the seeders will be executed to populate the database with any default rows
- If false:
  - The tables will still be automatically created, but they WILL NOT be truncated, nor will the seeders be run
  - You can run the seeders manually on the command line yourself
  - NOTE: if you make changes to a model definition, the change WILL NOT be reflected in the database unless you either:
    - Set this option to true, or...
    - Delete the existing table yourself and restart the server
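For example, a development-friendly .env-development might set both flags on. The two flag names and NODE_ENV come from this section; the values shown are just one reasonable choice:

```
NODE_ENV=development
DB_SYNC_WITH_SEQUELIZE=true
DEV_AUTO_LOGIN=true
```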
IMPORTANT: the following steps are for DEVELOPMENT ONLY:

If you want to use the Apollo Graph Manager, a metrics dashboard that offers some free features (such as reporting metrics on graphql queries), you must implement the following configuration:
- Visit https://engine.apollographql.com/ and create a graph such as gairos-development (which is the same as the service name) that apollo will use to register this engine into the graph manager
- Copy the API key presented to you into the root .env file
  - IMPORTANT: server/.env is to be used ONLY for this purpose; all other configurations should be applied in server/src/config/.env-<ENVIRONMENT>
- The server/apollo.config.js file is already configured with the aforementioned service name and endpoint (which must contain the /graphql path when on localhost, at least)
- Run this command in the root of /server: npx apollo service:push
- You should now be able to view metrics on this server in the dashboard
- Re-run the aforementioned command as the schema changes
The database server must be running in order for the server to start. To start the database, run docker-compose up in the root of this project. View the docker-compose.yml file for details on the database services. If you are using a database client locally, you can access the database via the same address mentioned at the beginning of the previous section.

Or, you may opt to use the provided pgadmin4 in your browser at localhost:54321. The credentials are in docker-compose.yml, and the username/password are both admin; both values are default.
Due to a bug with either docker-compose
or the pgadmin4
container provided,
the specified server settings may not be auto-populated by the pgadmin4
container upon starting. Therefore, you may need to provide a server
connection in the pgadmin4
web app upon initial setup after logging in.
You may use either:

- Your host IP address (NOT localhost!) and the port 54320 as defined in docker-compose.yml. This is due to how the container network has been configured: the containers are not bridged to the host network, so specifying localhost here refers to the pgadmin4 container itself, not your machine. Since your IP address may change, I suggest the following option...
- The static IP address configured by a custom network for this setup, which is also defined in docker-compose.yml; the static IP defaults to 172.25.0.2. Since you are using the static IP in the containerized network, you will need to use the internal port, 5432, instead of the port mapped from the host. These two values are default, and cannot be configured by custom environment variables reliably at this moment.
  - Maybe later I will bridge the network config, but for now, this works...
The recommended way to run the server is to use one of:

- yarn run watch:debug - automatically re-runs the server when you change any file in server/src (see nodemon.json for which files are watched), and logs your debug messages (implemented with the debug npm module)
- yarn run watch - this option does not print your debug messages
If you do not want the server to restart automatically on your changes, run either:

- yarn run dev
- yarn run dev:debug - to see your debug messages
The order in which the server is built looks like this (initialized by the commands mentioned above):

- lint (via eslint)
- create the database; see server/src/db for details (this may error out if the db already exists, but that does not stop the build process; it is expected behavior)
- run the server with babel-node (server/bin/www.js) to support import/export in the node environment
  - in server/src/db, conditionally truncate/sync the database and execute the seeders asynchronously, based on DB_SYNC_WITH_SEQUELIZE
It is recommended to use the debug npm module in your files, with the server: prefix followed by an identifier of your choosing (e.g. server:my-feature). These messages are only printed to the log when using the watch:debug command.
If you are having build issues, I suggest setting an environment variable
DEBUG=*
and running yarn run dev
to see the full debug information from each
3rd party library.
If you want to run unit tests, first create the .env-test file in server/src/config, similarly to how you created .env-development. This file is required when running tests locally. Note that the yarn run test:* commands do not explicitly set NODE_ENV to test. This is so that these commands can be recycled in the CI/CD pipeline; you must set NODE_ENV=test in your .env-test file.
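A minimal .env-test sketch might look like the following. NODE_ENV=test is required per the note above; the remaining keys should mirror your .env-development (at minimum, use a different APP_* port if you ever run both environments at once):

```
# Mirror your .env-development keys here, then override:
NODE_ENV=test
```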
Then run either of the following commands:
- yarn run test
- yarn run watch:test - re-runs tests when you make file changes; see nodemon-test.json for which files are being watched
Note:

- if your development server is currently running when attempting to run tests, and your .env-test has your test server listening on the same port, the tests cannot run. It is recommended to run only one environment at a time at this moment; otherwise, configure your .env-test differently. Note: the two environments may use the same DB_* credentials; it is just the APP_* variables that I am referring to
- either of the above testing options will print debug messages that use the debug npm module. The messages, when printed, will include the test suite name and the name of the test itself, in hopes of making the logs more readable and assisting with debugging. See server/test for how to utilize this logger.
I recommend the following workflow when creating a new API resource:

- create the sequelize model, either:
  - via sequelize-cli: move the outputted model to a new folder, corresponding to the model name, under the gairos api: server/api/gairos (or the other API directories as needed), and rename the file to model.js
  - manually, at the server/api/gairos/<NEW_MODEL|RESOURCE>/model.js path
- convert the sequelize model to ES6+ syntax (as needed), as is consistent with this project, and complete its schema in this file (used for the database)
  - It would be ideal to have the relationships between this model and the others completed here at this point as well
- create and complete the schema.graphql for the model (used for the graphql API response). The schema does not necessarily have to match the sequelize model
- create the resolvers.js for the model; don't complete this yet!
- create the datasource.js for the model; this will encapsulate the business logic, and is ultimately used by the resolvers and *.spec.js files
  - This file should export default a name and Class property; the Class needs to be equal to a class that extends the apollo-datasource class, called DataSource. If the datasource you are creating is fetching data from an external REST API, import apollo-datasource-rest and extend the RESTDataSource class instead. The exports are used to dynamically load this resource with less work on your part.
  - Note: the resolvers only need to worry about whether or not they are getting data, not 'how' they are doing it. You can think of the datasource as being the API to your model, or a controller in a model-view-controller app.
  - Data Sources are also implemented here in an attempt to be forward-thinking, and will hopefully implement caching of database requests in the future
in the future -
create an
index.js
file for the resource, andimport
+export default
themodel
,typeDefs
,resolvers
, anddataSource
- create the server/api/gairos/MODEL/test directory
- in this directory, create the following files for this resource: unit.spec.js, int.spec.js (integration tests), and e2e.spec.js (end-to-end tests). These are used to execute a Test Driven Design (TDD) development approach; I will refer you to the Practical Test Pyramid | Unit testing pyramid.
  - NOTE: only create these files if you are going to put tests in them, since the test runner expects the test files to have at least one test; otherwise the tests will fail.
- index.js: the index file in the test directory for this resource should export any mock data used in your tests. The mock data should be exactly what your tests expect in the results in order to pass. This file is purely a tool for organization purposes. It should export:
  - mockQueries, which are mapped to the graphql query in your schema that each is meant for. The way they are mapped is purely for organizational purposes
  - mockMutations, which are mapped to the graphql mutation in your schema that each is meant for (like mockQueries)
  - mockResponses, which are mapped to the API methods that you're testing in your datasource.
    - If this datasource is referencing a 3rd party API, then you should test the actual response from their API against how we shape it afterwards. I do this by using a custom reducer method, which simply maps only the fields I intend on using from the 3rd party API
    - If this datasource is referencing a 1st party API, this is not a requirement
    - The methods that are used by the graphql resolvers should simply fetch the data; if the data is being processed, that logic should be methodized so that you can unit test it; see userAPI.reduce*() and its corresponding unit test for an example
- unit.spec.js should test individual methods of the class in the datasource.
  - You should use jest's jest.spyOn() and jest.fn().mockReturnValue() to spy on how a function is used and to bypass any calls to external systems, respectively; the point of unit testing is to test a small unit of code (like a function), and that it returns an expected value for a given input (if any). View more on jest in their documentation.
  - your test should:
    - import the API that you are testing
    - mock out the method you are testing so that it returns the expected mock data (as defined in the index file)
    - verify that the now-stubbed method, when invoked, returns your expected mockResponses
  - This might look a little useless, but this workflow makes more sense when the unit you are testing has multiple codepaths which are influenced by your input values. You should test each path for expected results and errors.
  - If a function/method does not have any logic in it, then you don't really need to unit test it.
- int.spec.js should test the datasources and resolvers by way of using only graphql queries and the API you've defined in the datasource. You may also test the functionality of the relevant 1st party API if you want, but you'll get essentially the same results if you test it through the graphql query, as the resolvers are currently intended to only act as an interface to said API. You should:
  - start a graphql server for testing (in code; see the google calendar API for an example)
  - fetch the API that you are going to test
    - If you are using a 3rd party API, stub the method in the API that you are testing to return pre-defined mock data. If you are using a 1st party API, then you may opt to test using the database; that is up to you (preferred).
  - fetch the mock graphql query from your mock file
  - send the server a graphql query
  - verify the results match your expected mockResponses
- You may find it useful to create a seeder for this model. See the README in the server/db/ directory, and reference the seeders under server/db/seeders for examples on implementation
- Remember that once you fulfill the index.js file for the resource, the model, typeDefs, resolvers, and datasource are dynamically loaded into the application, and you can immediately begin querying the new resource after the server restarts.
In your browser, go to your app URL localhost:APP_PORT/graphql
to test the
graphql API.
If this is your first time setting up the environment, apply this fix to your GraphiQL Settings: #14
This is required in order to test the GraphQL Mutations that use the Google API;
you must auth through localhost:APP_PORT/auth/google
, and then subsequent
requests sent through the GraphiQL IDE will have your google auth tokens
attached to them.
NOTE: This does not apply to the testing environment because the google API
requests are mocked out. This should also not apply to production because
the session should be properly managed between the front-end and back-end
without the same-origin
issue described in the linked issue.
View the .travis.yml
file in the root of the project. Note that the test
command does not explicitly set the NODE_ENV
environment variable. This
must be explicitly set in the CI/CD
configuration (the aforementioned .yml
file). That way, when running the yarn run test:*
commands, the correct
environment is used.
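A sketch of how that might look in .travis.yml follows. Only the NODE_ENV=test requirement and the yarn run test:* commands come from this document; the exact job layout is an assumption:

```yaml
# Hypothetical fragment; your actual .travis.yml layout may differ
env:
  - NODE_ENV=test
script:
  - yarn run test
```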