Danibishop/longitude rework #32
Conversation
… the CARTO layer.
…ise and regular instances.
…fic response. Basic preview of returned values as table.
…w has the roadmap checklist.
…o separate samples.
…Redis for now). Password field added to Redis configuration, including associated error messages.
I love the direction the library is taking, but it's too complex. A simple metric: to perform cached queries against CARTO, the project has 38 files.
Also, a general comment: we are trying to move away from the `src` folder; we should call it `longitude`.
About the queries: we need to decide whether we are going to use Psycopg2 only or SQLAlchemy, and check how the bindings are implemented before starting to use it with CARTO. The bindings in CARTO are going to take a while, but the use of `query` methods in this library needs to be the same for every DB.
Well, to perform cached queries a library user needs to import two modules (one for the cache, one for the data source). Then, from those modules it may be necessary to import configuration classes, but that is not mandatory. The 38 files (or so) are about structure, tests, sample projects and other development stuff; if project size is a concern, we can deploy only the pure core scripts, and those are around 10 or so.
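As a rough illustration of that usage; module paths, class names and parameters below are hypothetical, not the library's actual API:

```python
# Hypothetical module paths and class names: a sketch of the intended usage,
# not the actual Longitude API.
from longitude.core.caches.ram import RamCache
from longitude.core.data_sources.postgres import PostgresDataSource

cache = RamCache()
ds = PostgresDataSource(config='my_postgres_config', cache=cache)
response = ds.query('SELECT 1')  # identical queries can later be served from the cache
```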
For me this is somewhat confusing, as "longitude" is the root folder. What would that use be regarding …
My plan is to make this totally transparent. I still have to decide whether SQLAlchemy will be a configurable aspect of the Psycopg2 data source or whether I will create two different data sources: one with SQLAlchemy, one without. Anyhow, the CARTO thing is irrelevant at this level, as bindings are resolved inside each specific query method implementation. From the user's perspective, it is irrelevant which data source is being used: there is always a call to …
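A minimal sketch of what that common interface could look like, with binding resolution hidden inside each concrete data source (names and signatures are illustrative, not the final design):

```python
from abc import ABC, abstractmethod


class DataSource(ABC):
    """Every backend exposes the same query() entry point."""

    @abstractmethod
    def execute_query(self, statement, params):
        """Backend-specific execution (psycopg2, SQLAlchemy, CARTO SQL API, ...)."""

    def query(self, statement, params=None):
        # How bindings are applied is resolved inside each concrete
        # implementation, so callers never care which backend answers.
        return self.execute_query(statement, params or {})


class Psycopg2DataSource(DataSource):
    def __init__(self, connection):
        self._connection = connection

    def execute_query(self, statement, params):
        with self._connection.cursor() as cursor:
            cursor.execute(statement, params)  # psycopg2-style %(name)s placeholders
            return cursor.fetchall()
```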
I don't find it confusing, but I understand. The awful thing is to see the imports with …
…figurable on a per-query basis using a bool parameter in the query() method.
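In other words, something along these lines (both `ds` and the flag name are placeholders, not the actual signature):

```python
counties = ds.query('SELECT * FROM counties', use_cache=True)   # may be served from the cache
fresh = ds.query('SELECT * FROM counties', use_cache=False)     # forces a round trip to the DB
```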
I was thinking about publishing the …
…no key is provided. Create table from model in SQLAlchemy seems to work (wrapped in data source object)
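For reference, the plain SQLAlchemy pattern behind "create table from model" looks roughly like this; the data source wrapper would hold the engine, and the connection URL and model below are made up:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class County(Base):
    __tablename__ = 'counties'
    id = Column(Integer, primary_key=True)
    name = Column(String)


# Requires a reachable database; the URL here is a placeholder.
engine = create_engine('postgresql+psycopg2://user:password@localhost/gis')
Base.metadata.create_all(engine)  # emits CREATE TABLE for mapped models that do not exist yet
```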
…S/Longitude into danibishop/longitude-rework
Also, about the `src` folder: now is the moment to rename it. It's annoying to see some imports from `src`. We are trying to fix that in other Geographica projects. Also, some of the most used Python libraries use the name-of-the-library approach:
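The practical effect on imports would look something like this (module paths are illustrative):

```python
# Before: imports leak the repository layout.
from src.core.data_sources.postgres import PostgresDataSource

# After renaming src/ to longitude/: imports read as the installed package.
from longitude.core.data_sources.postgres import PostgresDataSource
```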
…S/Longitude into danibishop/longitude-rework
e590cf3 to 6566bb2
…erage script for longitude path.
Already done.
…taframe read/write abstract methods
New config and let's push it!
EnvironmentConfiguration is now a domain class that exposes a single get(key) method. It will parse environment variables of the form LONGITUDE__PARENT_OBJECT__CHILD_OBJECT__VALUE=42 as {'parent_object': {'child_object': {'value': 42}}}. It also allows recovering the values using nested keys ('.' is the joiner): Config.get('parent_object.child_object.value') returns 42 (as an integer). Also, if a value can be parsed as an integer, it will be.
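A minimal sketch of that parsing contract as described above (an illustration, not the actual class):

```python
import os


class EnvironmentConfiguration:
    """Sketch: parses LONGITUDE__PARENT_OBJECT__CHILD_OBJECT__VALUE=42
    into {'parent_object': {'child_object': {'value': 42}}}."""

    PREFIX = 'LONGITUDE__'

    def __init__(self, environ=None):
        self._config = {}
        for name, raw in (environ if environ is not None else os.environ).items():
            if not name.startswith(self.PREFIX):
                continue
            keys = name[len(self.PREFIX):].lower().split('__')
            node = self._config
            for key in keys[:-1]:
                node = node.setdefault(key, {})
            # Values that look like integers are stored as integers.
            node[keys[-1]] = int(raw) if raw.lstrip('-').isdigit() else raw

    def get(self, path):
        """Recover nested values using '.' as the joiner."""
        node = self._config
        for key in path.split('.'):
            node = node[key]
        return node


config = EnvironmentConfiguration({'LONGITUDE__PARENT_OBJECT__CHILD_OBJECT__VALUE': '42'})
assert config.get('parent_object.child_object.value') == 42
```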
Updated PR from #29