diff --git a/contrib/gn_module_occhab/pyproject.toml b/contrib/gn_module_occhab/pyproject.toml
new file mode 100644
index 0000000000..ad0679d20c
--- /dev/null
+++ b/contrib/gn_module_occhab/pyproject.toml
@@ -0,0 +1,3 @@
+[build-system]
+requires = ["setuptools >= 64"]
+build-backend = "setuptools.build_meta"
diff --git a/contrib/gn_module_validation/pyproject.toml b/contrib/gn_module_validation/pyproject.toml
new file mode 100644
index 0000000000..ad0679d20c
--- /dev/null
+++ b/contrib/gn_module_validation/pyproject.toml
@@ -0,0 +1,3 @@
+[build-system]
+requires = ["setuptools >= 64"]
+build-backend = "setuptools.build_meta"
diff --git a/contrib/occtax/pyproject.toml b/contrib/occtax/pyproject.toml
new file mode 100644
index 0000000000..ad0679d20c
--- /dev/null
+++ b/contrib/occtax/pyproject.toml
@@ -0,0 +1,3 @@
+[build-system]
+requires = ["setuptools >= 64"]
+build-backend = "setuptools.build_meta"
diff --git a/data/scripts/import_ginco/.gitignore b/data/scripts/import_ginco/.gitignore
deleted file mode 100644
index b81c7954b7..0000000000
--- a/data/scripts/import_ginco/.gitignore
+++ /dev/null
@@ -1 +0,0 @@
-*.xml
\ No newline at end of file
diff --git a/data/scripts/import_ginco/README.rst b/data/scripts/import_ginco/README.rst
deleted file mode 100644
index 121253d49e..0000000000
--- a/data/scripts/import_ginco/README.rst
+++ /dev/null
@@ -1,85 +0,0 @@
-GINCO -> GeoNature migration scripts
-=======================================
-
-
-
-Scripts
-*******
-
-This folder contains several scripts used to migrate data from GINCO to GeoNature.
-
-* ``restore_ginco_db.sh``: restores a GINCO database from an SQL dump, then creates a Foreign Data Wrapper (FDW) between the restored database and the target GeoNature database. A new ``ginco_migration`` schema is created, holding the tables of the ``website`` and ``raw_data`` schemas of the source GINCO database.
-* GINCO currently uses Taxref version 12, whereas GeoNature only installs with version 11 or 13 of the referential. The ``import_taxref/import_new_taxref_version.sh`` script imports Taxref version 12. It must be run on a GeoNature instance free of any data, so as not to create integrity conflicts.
-* ``insert_data.sh``: reads from the previously created FDW to insert the data into the GeoNature synthese. Replace the variable in ``synthese.sql`` with the name of the table holding the GINCO taxon occurrence data.
-* ``import_mtd.sh``: wraps a Python script that retrieves, from the MTD web service, the acquisition frameworks and the detailed information of every dataset present in the GINCO database.
-* ``find_conflicts.sql``: reports the integrity errors of the source data (see below).
-
-Copy ``settings.ini.sample`` to ``settings.ini``, fill it in, then run the scripts in the order described above. Each script writes a log file in the ``log`` directory.
-
-In ``synthese.sql``, replace the placeholder with the name of the table containing the GINCO data.
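For reference, a complete run could look like the sketch below. This is only an illustration: the order follows the list above, and the exact invocations and working directory are assumptions.

::

    cp settings.ini.sample settings.ini    # then fill in the settings
    ./restore_ginco_db.sh
    ./import_taxref/import_new_taxref_version.sh
    ./insert_data.sh
    ./import_mtd.sh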
-
-Which data are migrated?
-*********************************
-
-- All organisms are migrated from the ``providers`` table. Users are not imported, to avoid creating duplicates when connecting to the INPN CAS.
-- Datasets are migrated from the ``jdd`` and ``jdd_fields`` tables into ``gn_meta.t_datasets``. Datasets flagged as ``deleted`` are not imported.
-  Each dataset's link to its acquisition framework, as well as all the metadata-related information, is retrieved by the ``import_mtd.sh`` script, which also inserts organisms and users by fetching the actors of the datasets and acquisition frameworks.
-- The taxon occurrence records of the ``raw_data.`` table are migrated into ``gn_synthese.synthese``. Records without a geometry or a cd_nom, belonging to a deleted dataset, or duplicated (non-unique UUID in the source table) are not imported.
-
-Integrity errors
-*******************
-
-The ``find_conflicts.sql`` script creates tables reporting the integrity errors.
-
-Indeed, during the insertion script several integrity constraints were disabled, and some records were excluded, so that the insertion into the synthese table could succeed.
-
-- The ``ginco_migration.cd_nom_invalid`` table lists every record whose cd_nom is missing from ``taxonomie.taxref`` (version 12).
-- The ``ginco_migration.cd_nom_null`` table lists the records whose cd_nom is null.
-- The ``ginco_migration.date_invalid`` table lists the records whose end date is earlier than their start date.
-- The ``ginco_migration.count_invalid`` table lists the records whose maximum count is lower than their minimum count.
-- The ``ginco_migration.doublons`` table lists the records whose UUID is not unique, along with the number of occurrences of each.
-
-Once the data have been corrected in ``gn_synthese.synthese``, you can re-enable the following constraints:
-
-::
-
-    ALTER TABLE gn_synthese.synthese
-    ADD CONSTRAINT check_synthese_count_max CHECK (count_max >= count_min);
-
-    ALTER TABLE gn_synthese.synthese
-    ADD CONSTRAINT check_synthese_date_max CHECK (date_max >= date_min);
-
-
-Permissions management
-**********************
-
-Not all the permissions that exist in GINCO exist yet in GeoNature (view public data, view sensitive data, etc.).
-
-Until these evolutions are available, two groups have been created (based on groups existing in GINCO):
-
-- An "Administrateur" group:
-
-  - It has the following CRUVED on every module: C:3 R:3 U:3 V:3 E:3 D:3
-  - It has access to the Occtax, Occhab, Metadata, Admin, Synthese and Validation modules
-
-- A "Producteur" group:
-
-  - It has access to the Synthese / Occtax / Occhab modules with the following CRUVED: C:3 R:2 U:1 V:0 E:2 D:1
-  - Metadata: C:0 R:2
-  - No access to: Validation, Admin
-
-Members of the "Administrateur" group also have access to UsersHub and TaxHub with an "administrateur" profile.
-
-After their first login through the CAS, the administrator must log into UsersHub to add themselves to the "Administrateur" group.
-
-Connecting to the INPN CAS
-**************************
-
-The ``BDD.ID_USER_SOCLE_1`` parameter controls the group (and therefore the permissions) of any new user logging into the platform through the INPN CAS.
-
-Set it to the id of the "Producteur" group to which permissions were granted (see above).
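To find that id, a lookup such as the following could be used. This is a sketch: it assumes the group was created under the name ``Producteur`` in ``utilisateurs.t_roles``.

::

    -- id of the "Producteur" group (groupe = true marks group entries)
    SELECT id_role
    FROM utilisateurs.t_roles
    WHERE groupe IS TRUE AND nom_role = 'Producteur';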
-
-Other configuration
-====================
-- Carto (map settings)
-- Limit the number of observations in the validation module
-- Increase the gunicorn timeout
diff --git a/data/scripts/import_ginco/check_data.sql b/data/scripts/import_ginco/check_data.sql
deleted file mode 100644
index 11bb1c9e96..0000000000
--- a/data/scripts/import_ginco/check_data.sql
+++ /dev/null
@@ -1,58 +0,0 @@
--- Data checks after the migration
-
--- Total number of taxon occurrence records in GINCO (table ginco_migration.model_1_observation)
-select count(*)
-from ginco_migration.model_1_observation;
-
--- Number of records whose datasets are deleted
-
-select count(*)
-  FROM ginco_migration.model_1_observation m
-  WHERE m.jddmetadonneedeeid::text IN ( SELECT f.value_string
-    FROM ginco_migration.jdd j
-    JOIN ginco_migration.jdd_field f ON f.jdd_id = j.id
-    WHERE j.status = 'deleted'::text AND f.key::text = 'metadataId'::text);
-
--- Number of records without geometry
-
-select count(*)
-  FROM ginco_migration.model_1_observation m
-  WHERE m.geometrie is null;
-
--- Number of records currently in the materialized view
--- (used to insert into the synthese), i.e. without the deleted records and the records without geometry
-
-select count(*)
-from ginco_migration.vm_data_model_source;
-
--- Number of duplicated records
-
-select count(*)
-from ginco_migration.doublons;
-
--- Number of null cd_nom
-
-select count(*)
-from ginco_migration.cd_nom_null;
-
--- Number of invalid cd_nom
-
-select count(*) from ginco_migration.cd_nom_invalid;
-
--- Invalid counts (nb_min > nb_max)
-
-select * from ginco_migration.count_invalid;
-
--- Invalid dates (date_min > date_max)
-
-select * from ginco_migration.date_invalid;
-
--- Number of records in the synthese
-
-select count(*) from gn_synthese.synthese;
-
--- Number of datasets in the GINCO database
-
-SELECT count(*)
-  FROM ginco_migration.jdd j
-  JOIN ginco_migration.jdd_field f ON f.jdd_id = j.id
-  WHERE j.status != 'deleted'::text AND f.key::text = 'metadataId'::text;
-
--- Number of datasets in GeoNature
-
-select count(*)
-from gn_meta.t_datasets;
diff --git a/data/scripts/import_ginco/find_conflicts.sql b/data/scripts/import_ginco/find_conflicts.sql
deleted file mode 100644
index da4122f87b..0000000000
--- a/data/scripts/import_ginco/find_conflicts.sql
+++ /dev/null
@@ -1,40 +0,0 @@
-DROP TABLE IF EXISTS ginco_migration.cd_nom_invalid;
-CREATE TABLE ginco_migration.cd_nom_invalid AS (
-    select m.identifiantpermanent, m.cdnom
-    from ginco_migration.vm_data_model_source m
-    left join taxonomie.taxref t on t.cd_nom = m.cdnom::integer
-    where t.cd_nom is null and m.cdnom is not null
-);
-
-DROP TABLE IF EXISTS ginco_migration.cd_nom_null;
-CREATE TABLE ginco_migration.cd_nom_null AS (
-    SELECT identifiantpermanent
-    FROM ginco_migration.vm_data_model_source
-    WHERE cdnom IS NULL
-);
-
-DROP TABLE IF EXISTS ginco_migration.date_invalid;
-CREATE TABLE ginco_migration.date_invalid AS (
-    SELECT unique_id_sinp
-    FROM gn_synthese.synthese
-    WHERE date_max < date_min
-);
-
-DROP TABLE IF EXISTS ginco_migration.count_invalid;
-CREATE TABLE ginco_migration.count_invalid AS (
-    SELECT unique_id_sinp
-    FROM gn_synthese.synthese
-    WHERE count_max < count_min
-);
-
-DROP TABLE IF EXISTS ginco_migration.doublons;
-CREATE TABLE ginco_migration.doublons(
-    nb_doublons integer,
-    uuid_doublon character varying
-);
--- find the duplicates:
-INSERT INTO ginco_migration.doublons
-SELECT count(*) as nb_doublons, identifiantpermanent
-FROM ginco_migration.vm_data_model_source
-GROUP BY identifiantpermanent
-HAVING count(*) > 1;
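Both SQL files can be run directly with psql, ``find_conflicts.sql`` first since it creates the tables that ``check_data.sql`` reads. A sketch, where host, user and database name are placeholders to adapt:

    psql -h localhost -U <geonature_user> -d <geonature_db> -f find_conflicts.sql
    psql -h localhost -U <geonature_user> -d <geonature_db> -f check_data.sql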
diff --git a/data/scripts/import_ginco/import_ca.py b/data/scripts/import_ginco/import_ca.py
deleted file mode 100644
index c3c8212139..0000000000
--- a/data/scripts/import_ginco/import_ca.py
+++ /dev/null
@@ -1,339 +0,0 @@
-"""
-    Script importing metadata into the GeoNature database, based on the uuids of the datasets to import.
-    Uses the INPN web service to fetch the corresponding XML files. Works with datasets and acquisition
-    frameworks, not yet with parent acquisition frameworks (to do).
-"""
-
-import os
-import datetime
-import xml.etree.ElementTree as ET
-
-import requests
-import psycopg2
-
-
-"""
-CONFIG
-"""
-SQLALCHEMY_DATABASE_URI = "postgresql://{user}:{password}@{host}:{port}/{database}".format(
-    user=os.environ["geonature_pg_user"],
-    password=os.environ["geonature_user_pg_pass"],
-    host=os.environ["db_host"],
-    port=os.environ["db_port"],
-    database=os.environ["geonature_db_name"],
-)
-TABLE_DONNEES_INPN = os.environ["TABLE_DONNEES_INPN"]
-CHAMP_ID_JDD = os.environ["CHAMP_ID_JDD"]
-DELETE_XML_FILE_AFTER_IMPORT = os.environ["DELETE_XML_FILE_AFTER_IMPORT"]
-
-
-# Connecting to the database and opening a cursor
-try:
-    conn = psycopg2.connect(SQLALCHEMY_DATABASE_URI)
-except Exception as e:
-    print("Could not connect to the database")
-    raise
-
-cursor = conn.cursor()
-
-
-"""
-Constants
-"""
-
-# Namespaces for metadata XML files
-xml_namespaces = {
-    "gml": "http://www.opengis.net/gml/3.2",
-    "ca": "http://inpn.mnhn.fr/mtd",
-    "jdd": "http://inpn.mnhn.fr/mtd",
-    "xlink": "http://www.w3.org/1999/xlink",
-    "xsi": "http://www.w3.org/2001/XMLSchema-instance",
-}
-
-# Paths to the different kinds of information in the XML files
-af_main = "gml:featureMember/ca:CadreAcquisition/ca:"
-af_temp_ref = "gml:featureMember/ca:CadreAcquisition/ca:ReferenceTemporelle/ca:"
-af_main_actor = "gml:featureMember/ca:CadreAcquisition/ca:acteurPrincipal/ca:ActeurType/ca:"
-ds_main = "gml:featureMember/jdd:JeuDeDonnees/jdd:"
-ds_bbox = "gml:featureMember/jdd:JeuDeDonnees/jdd:empriseGeographique/jdd:BoundingBox/jdd:"
-ds_contact_pf = "gml:featureMember/jdd:JeuDeDonnees/jdd:pointContactPF/jdd:ActeurType/jdd:"
-
-"""
-Parsing functions
-    3 distinct functions used to get 3 kinds of data while parsing the XML files:
-    - single data under the file root or under a non-repeatable node (dataset and acquisition framework name, bbox...)
-    - tuple data under the file root (territories, keywords...)
-    - single data under repeatable nodes, themselves under the file root (publications, actors...)
-""" - - -def get_single_data(node, path, tag): - # path = af_main & tags = ['identifiantCadre','libelle','description','estMetaCadre','typeFinancement','niveauTerritorial','precisionGeographique','cibleEcologiqueOuGeologique','descriptionCible','dateCreationMtd','dateMiseAJourMtd'] - # path = af_temp_ref & tags = ['dateLancement','dateCloture'] - # path = af_main_actor & tags = ['mail','nomPrenom','roleActeur','organisme','idOrganisme'] - # path = ds_main & tags = ['identifiantJdd','identifiantCadre','libelle','libelleCourt','description','typeDonnees','objectifJdd','domaineMarin','domaineTerrestre','dateCreation','dateRevision'] - # path = ds_bbox & tags = ['borneNord','borneSud','borneEst','borneOuest'] - # path = ds_contact_pf & tags = ['mail','nomPrenom','roleActeur','organisme','idOrganisme'] - try: - data = node.find(path + tag, namespaces=xml_namespaces).text - if data != None: - return data - else: - return "" - except Exception as e: - return "" - - -def get_tuple_data(node, path, tag): - # path = af_main & tags = ['motCle','objectifCadre','voletSINP','territoire'] - # path = ds_main & tags = ['motCle','territoire'] - data = [] - try: - datas = CURRENT_XML.findall(path + tag, namespaces=xml_namespaces) - if datas == []: - return "" - else: - for row in datas: - data.append(str(row.text)) - return data - except Exception as e: - return "" - - -def get_inner_data(object, iter, tag): - # Object = af_publications, iter = cur_publi, tags = ['referencePublication','URLPublication'] - # Object = ds_protocols, iter = cur_proto, tags = ['libelleProtocole','descriptionProtocole','url'] - # Object = ds_pointscontacts, iter = point_contact, - # Object = af_othersactors, iter = other_actor, tags = ['nomPrenom', 'mail', 'roleActeur', 'organisme', 'idOrganisme'] - try: - cur_data = object[iter].find("ca:" + tag, xml_namespaces).text - if cur_data != "": - return cur_data - else: - return "" - except Exception as e: - return "" - - -""" -Datatype protocols - Only protocols with a name are considered. Protocol name is the reference used as "key" for import - WARNING : on 490 tested datasets, no one had protocol name stored in xml files. So, no protocols have been created... 
(only url most of time) -""" - -""" -Acquisition frameworks -""" - - -# Check existing to avoid duplicates -def get_known_af(): - cursor.execute( - "SELECT DISTINCT unique_acquisition_framework_id FROM gn_meta.t_acquisition_frameworks" - ) - results = cursor.fetchall() - return [r[0].upper() for r in results] - - -def insert_update_t_acquisition_frameworks(CURRENT_AF_ROOT, action, cur_af_uuid): - identifiantCadre = cur_af_uuid - libelle = get_single_data(CURRENT_AF_ROOT, af_main, "libelle") - description = get_single_data(CURRENT_AF_ROOT, af_main, "description") - - # dateLancement : DEFAULT='01/01/1800' - if get_single_data(CURRENT_AF_ROOT, af_temp_ref, "dateLancement") == "": - dateLancement = datetime.datetime.now() - else: - dateLancement = get_single_data(CURRENT_AF_ROOT, af_temp_ref, "dateLancement") - # dateCreationMtd - if get_single_data(CURRENT_AF_ROOT, af_main, "dateCreationMtd") == "": - dateCreationMtd = datetime.datetime.now() - else: - dateCreationMtd = get_single_data(CURRENT_AF_ROOT, af_main, "dateCreationMtd") - # dateMiseAJourMtd - if get_single_data(CURRENT_AF_ROOT, af_main, "dateMiseAJourMtd") == "": - dateMiseAJourMtd = datetime.datetime.now() - else: - dateMiseAJourMtd = get_single_data(CURRENT_AF_ROOT, af_main, "dateMiseAJourMtd") - # dateCloture - if get_single_data(CURRENT_AF_ROOT, af_temp_ref, "dateCloture") == "": - dateCloture = datetime.datetime.now() - else: - dateCloture = get_single_data(CURRENT_AF_ROOT, af_temp_ref, "dateCloture") - # Write and run query - if action == "create": - cur_query = """ - INSERT INTO gn_meta.t_acquisition_frameworks( - unique_acquisition_framework_id, - acquisition_framework_name, - acquisition_framework_desc, - acquisition_framework_start_date, - acquisition_framework_end_date, - meta_create_date, - meta_update_date, - opened - ) - VALUES ( - %s , - %s , - %s , - %s , - %s , - %s , - %s , - %s - ) RETURNING id_acquisition_framework; - """ - result = "New acquisition framework created..." - cursor.execute( - cur_query, - ( - identifiantCadre, - libelle[0:254], - description, - dateLancement, - dateCloture, - dateCreationMtd, - dateMiseAJourMtd, - False, - ), - ) - elif action == "update": - cur_query = """ - UPDATE gn_meta.t_acquisition_frameworks SET - acquisition_framework_name= %s, - acquisition_framework_desc= %s, - acquisition_framework_start_date= %s, - acquisition_framework_end_date= %s, - meta_create_date= %s, - meta_update_date= %s, - opened= %s - WHERE unique_acquisition_framework_id= %s - RETURNING id_acquisition_framework; - """ - result = "Existing acquisition framework updated..." 
- cursor.execute( - cur_query, - ( - identifiantCadre, - libelle[0:254], - description, - dateLancement, - dateCloture, - dateCreationMtd, - dateMiseAJourMtd, - False, - cur_af_uuid, - ), - ) - r = cursor.fetchone() - created_or_returned_id = None - if r: - created_or_returned_id = r[0] - conn.commit() - return created_or_returned_id - - -""" - Datasets -""" - - -def get_known_ds(): - cursor.execute("SELECT DISTINCT unique_dataset_id FROM gn_meta.t_datasets") - results = str(cursor.fetchall()) - known = ( - results.replace("(", "") - .replace(")", "") - .replace("[", "") - .replace("]", "") - .replace(",", "") - .replace("'", "") - .split(" ") - ) - return known - - -""" - Getting XML Files & pushing Acquisition Frameworks data in GeoNature DataBase -""" - - -def insert_CA(cur_af_uuid): - """ - insert a CA and return the created ID - """ - if cur_af_uuid[1:-1] in get_known_af(): - action = "update" - else: - action = "create" - # Get and parse corresponding XML File - # remove '' - cur_af_uuid = cur_af_uuid.upper() - af_URL = "https://inpn.mnhn.fr/mtd/cadre/export/xml/GetRecordById?id={}".format(cur_af_uuid) - request = requests.get(af_URL) - if request.status_code == 200: - open("{}.xml".format(cur_af_uuid), "wb").write(request.content) - CURRENT_AF_ROOT = ET.parse("{}.xml".format(cur_af_uuid)).getroot() - # Feed t_acquisition_frameworks - af_id = insert_update_t_acquisition_frameworks(CURRENT_AF_ROOT, action, cur_af_uuid) - # Feed cor_acquisition_framework_voletsinp - - # Delete files if choosen - if DELETE_XML_FILE_AFTER_IMPORT == "True": - os.remove("{}.xml".format(cur_af_uuid)) - return af_id - else: - print("CA NOT FOUND: " + cur_af_uuid) - return None - - -# Parse and import data in GeoNature database - -""" - Getting XML Files & pushing Datasets data in GeoNature DataBase -""" -# Getting uuid list of JDD to import - -q = "SELECT id_acquisition_framework FROM gn_meta.t_acquisition_frameworks WHERE acquisition_framework_name ILIKE 'CA provisoire - import Ginco -> GeoNature'" -cursor.execute(q) -old_id_af = cursor.fetchone()[0] - - -cursor.execute( - "SELECT unique_dataset_id FROM gn_meta.t_datasets WHERE id_acquisition_framework =" - + str(old_id_af) -) -ds_uuid_list = cursor.fetchall() - - -for ds_iter in range(len(ds_uuid_list)): - cur_ds_uuid = ds_uuid_list[ds_iter][0] - if cur_ds_uuid not in get_known_ds(): - action = "create" - else: - action = "update" - # Get and parse corresponding XML File - ds_URL = "https://inpn.mnhn.fr/mtd/cadre/jdd/export/xml/GetRecordById?id={}".format( - cur_ds_uuid.upper() - ) - req = requests.get(ds_URL) - if req.status_code == 200: - print(cur_ds_uuid + " found") - open("{}.xml".format(cur_ds_uuid), "wb").write(requests.get(ds_URL).content) - CURRENT_DS_ROOT = ET.parse("{}.xml".format(cur_ds_uuid)).getroot() - # insertion des CA - current_af_uuid = get_single_data(CURRENT_DS_ROOT, ds_main, "identifiantCadre") - current_id_ca = insert_CA(current_af_uuid) - if current_id_ca: - # Feed t_datasets - query_update_ds = f""" - UPDATE gn_meta.t_datasets - SET id_acquisition_framework = {current_id_ca} - WHERE unique_dataset_id = '{cur_ds_uuid}' - """ - cursor.execute(query_update_ds) - conn.commit() - print("UPDATE JDD") - if DELETE_XML_FILE_AFTER_IMPORT == "True": - os.remove("{}.xml".format(cur_ds_uuid)) - else: - print(f"{cur_ds_uuid} not found") diff --git a/data/scripts/import_ginco/import_ca.sh b/data/scripts/import_ginco/import_ca.sh deleted file mode 100644 index 95c817aad8..0000000000 --- a/data/scripts/import_ginco/import_ca.sh +++ /dev/null @@ 
-1,22 +0,0 @@ -#!/usr/bin/env bash - -. settings.ini -# export all variable in settings.ini -# -> they are available in python os.environ -export $(grep -i --regexp ^[a-z] settings.ini | cut -d= -f1) -export TABLE_DONNEES_INPN CHAMP_ID_CA CHAMP_ID_JDD DELETE_XML_FILE_AFTER_IMPORT - -sudo apt-get install virtualenv - -if [ -d 'venv/' ] -then - echo "Suppression du virtual env existant..." - sudo rm -rf venv -fi - -virtualenv -p /usr/bin/python3 venv -source venv/bin/activate -pip install psycopg2 requests - - -python3 import_ca.py diff --git a/data/scripts/import_ginco/import_mtd.py b/data/scripts/import_ginco/import_mtd.py deleted file mode 100644 index 268033eaed..0000000000 --- a/data/scripts/import_ginco/import_mtd.py +++ /dev/null @@ -1,1202 +0,0 @@ -""" - Script importing metadata in GeoNature DataBase based on uuid of datasets to import - Use the inpn webservice to get corresponding xml files. Works with datasets and acquisition frameworks, not yet with parents acquisition frameworks (to do) -""" - -import os -import xml.etree.ElementTree as ET - -import requests -import psycopg2 - - -""" -CONFIG -""" -SQLALCHEMY_DATABASE_URI = "postgresql://{user}:{password}@{host}:{port}/{database}".format( - user=os.environ["geonature_pg_user"], - password=os.environ["geonature_user_pg_pass"], - host=os.environ["db_host"], - port=os.environ["db_port"], - database=os.environ["geonature_db_name"], -) -TABLE_DONNEES_INPN = os.environ["TABLE_DONNEES_INPN"] -CHAMP_ID_JDD = os.environ["CHAMP_ID_JDD"] -DELETE_XML_FILE_AFTER_IMPORT = os.environ["DELETE_XML_FILE_AFTER_IMPORT"] - - -# Connecting to DB and openning a cursor -try: - conn = psycopg2.connect(SQLALCHEMY_DATABASE_URI) -except Exception as e: - print("Connexion à la base impossible") - -cursor = conn.cursor() - - -""" -Constants -""" - -# Namespaces for metadata XML files -xml_namespaces = { - "gml": "http://www.opengis.net/gml/3.2", - "ca": "http://inpn.mnhn.fr/mtd", - "jdd": "http://inpn.mnhn.fr/mtd", - "xlink": "http://www.w3.org/1999/xlink", - "xsi": "http://www.w3.org/2001/XMLSchema-instance", -} - -# Paths to different king of informations in XML files -af_main = "gml:featureMember/ca:CadreAcquisition/ca:" -af_temp_ref = "gml:featureMember/ca:CadreAcquisition/ca:ReferenceTemporelle/ca:" -af_main_actor = "gml:featureMember/ca:CadreAcquisition/ca:acteurPrincipal/ca:ActeurType/ca:" -ds_main = "gml:featureMember/jdd:JeuDeDonnees/jdd:" -ds_bbox = "gml:featureMember/jdd:JeuDeDonnees/jdd:empriseGeographique/jdd:BoundingBox/jdd:" -ds_contact_pf = "gml:featureMember/jdd:JeuDeDonnees/jdd:pointContactPF/jdd:ActeurType/jdd:" - -""" -Parsing functions - 3 distinct functions used to get 3 kinds of data, parsing XML Files : - - single data under file root or non-repeatable node, (dataset and acquisition framework name, bbox...) - - tuple data under file root, (territories, keywords...) - - single data under repeatable nodes, themself under file root (publications, actors...) 
-""" - - -def get_single_data(node, path, tag): - # path = af_main & tags = ['identifiantCadre','libelle','description','estMetaCadre','typeFinancement','niveauTerritorial','precisionGeographique','cibleEcologiqueOuGeologique','descriptionCible','dateCreationMtd','dateMiseAJourMtd'] - # path = af_temp_ref & tags = ['dateLancement','dateCloture'] - # path = af_main_actor & tags = ['mail','nomPrenom','roleActeur','organisme','idOrganisme'] - # path = ds_main & tags = ['identifiantJdd','identifiantCadre','libelle','libelleCourt','description','typeDonnees','objectifJdd','domaineMarin','domaineTerrestre','dateCreation','dateRevision'] - # path = ds_bbox & tags = ['borneNord','borneSud','borneEst','borneOuest'] - # path = ds_contact_pf & tags = ['mail','nomPrenom','roleActeur','organisme','idOrganisme'] - try: - data = ( - node.find(path + tag, namespaces=xml_namespaces) - .text.replace("'", "''") - .replace("’", "''") - .replace('"', "") - .replace("\u202f", " ") - ) - if data != None: - return str("'" + data + "'") - else: - return str("''") - except Exception as e: - return str("''") - - -def get_tuple_data(node, path, tag): - # path = af_main & tags = ['motCle','objectifCadre','voletSINP','territoire'] - # path = ds_main & tags = ['motCle','territoire'] - data = [] - try: - datas = CURRENT_XML.findall(path + tag, namespaces=xml_namespaces) - if datas == []: - return str("''") - else: - for row in datas: - data.append( - str( - "'" - + row.text.replace("'", "''") - .replace("’", "''") - .replace('"', "") - .replace("\u202f", " ") - + "'" - ) - ) - return data - except Exception as e: - return str("''") - - -def get_inner_data(object, iter, tag): - # Object = af_publications, iter = cur_publi, tags = ['referencePublication','URLPublication'] - # Object = ds_protocols, iter = cur_proto, tags = ['libelleProtocole','descriptionProtocole','url'] - # Object = ds_pointscontacts, iter = point_contact, - # Object = af_othersactors, iter = other_actor, tags = ['nomPrenom', 'mail', 'roleActeur', 'organisme', 'idOrganisme'] - try: - cur_data = ( - object[iter] - .find("ca:" + tag, xml_namespaces) - .text.replace("'", "''") - .replace("’", "''") - .replace('"', "") - .replace("\u202f", " ") - ) - if cur_data != "": - return "'" + cur_data + "'" - else: - return str("''") - except Exception as e: - return str("''") - - -""" -Datatype protocols - Only protocols with a name are considered. Protocol name is the reference used as "key" for import - WARNING : on 490 tested datasets, no one had protocol name stored in xml files. So, no protocols have been created... 
(only url most of time) -""" - - -def get_known_protocols(): - protocols = [] - cursor.execute("SELECT DISTINCT protocol_name FROM gn_meta.sinp_datatype_protocols") - protos = cursor.fetchall() - for proto in protos: - protocols.append(str(proto).replace('"', "'")) - return protocols - - -def insert_sinp_datatype_protocols(cur_protocol_name, cur_protocol_desc, cur_protocol_url): - # Protocol type not found in XML files, by default 'inconnu' - query = f""" - INSERT INTO gn_meta.sinp_datatype_protocols (protocol_name,protocol_desc,id_nomenclature_protocol_type,protocol_url) - VALUES ({cur_protocol_name}, {cur_protocol_desc} , (SELECT(ref_nomenclatures.get_id_nomenclature('TYPE_PROTOCOLE', '0'))), {cur_protocol_url}) - """ - cursor.execute(query) - conn.commit() - print("New protocol imported") - - -def update_sinp_datatype_protocols(cur_protocol_name, cur_protocol_desc, cur_protocol_url): - # Protocol type not found in XML files, by default 'inconnu' - query = f""" - UPDATE gn_meta.sinp_datatype_protocols SET protocol_desc={cur_protocol_desc}, - id_nomenclature_protocol_type=ref_nomenclatures.get_id_nomenclature('TYPE_PROTOCOLE', '0'), - protocol_url={cur_protocol_url} - WHERE protocol_name={cur_protocol_name} - """ - cursor.execute(query) - conn.commit() - print("Existing protocol updated") - - -""" -Datatype publications - Only publications with a name are considered. Publication name is the reference used as "key" for import -""" - - -def get_known_publications(): - publications = [] - cursor.execute("SELECT DISTINCT publication_reference FROM gn_meta.sinp_datatype_publications") - pubs = cursor.fetchall() - for pub in pubs: - publications.append(str(pub).replace('"', "'")) - return publications - - -def insert_sinp_datatype_publications(cur_publication, cur_url): - create_publication = ( - "INSERT INTO gn_meta.sinp_datatype_publications (publication_reference,publication_url)" - + " VALUES (" - + cur_publication - + ", " - + cur_url - + ")" - ) - cursor.execute(create_publication) - conn.commit() - print("New publication created...") - - -def update_sinp_datatype_publications(cur_publication, cur_url): - update_publication = ( - "UPDATE gn_meta.sinp_datatype_publications SET publication_url=" - + cur_url - + " WHERE publication_reference=" - + cur_publication - ) - cursor.execute(update_publication) - conn.commit() - print("Existing publication updated...") - - -""" -Actors : organisms (bib_organismes) and persons (t_roles) -""" - - -# Organisms -def get_known_organisms(): - cursor.execute("SELECT DISTINCT uuid_organisme FROM utilisateurs.bib_organismes") - results = str(cursor.fetchall()) - known = ( - results.replace("(", "") - .replace(")", "") - .replace("[", "") - .replace("]", "") - .replace(",", "") - .replace("'", "") - .split(" ") - ) - return known - - -def insert_organism(cur_organism_uuid, cur_organism_name): - create_organism = ( - "INSERT INTO utilisateurs.bib_organismes (uuid_organisme,nom_organisme) VALUES (" - + cur_organism_uuid - + ", " - + cur_organism_name - + ")" - ) - cursor.execute(create_organism) - conn.commit() - print("New organism created...") - - -def update_organism(cur_organism_uuid, cur_organism_name): - update_organism = ( - "UPDATE utilisateurs.bib_organismes SET uuid_organisme=" - + cur_organism_uuid - + ", nom_organisme=" - + cur_organism_name - + " WHERE uuid_organisme=" - + str.lower(cur_organism_uuid) - ) - cursor.execute(update_organism) - conn.commit() - print("Existing organism updated...") - - -# Persons -def get_known_persons(): - 
cursor.execute( - "SELECT DISTINCT nom_role||(CASE WHEN prenom_role='' THEN '' ELSE ' '||prenom_role END) FROM utilisateurs.t_roles WHERE groupe='False'" - ) - return str(cursor.fetchall()).replace('"', "'") - - -def insert_person(cur_person_name, cur_person_mail): - if len(cur_person_name.rsplit(" ", 1)) == 2: - role_name = cur_person_name.replace("'", "").rsplit(" ", 1)[0] - role_firstname = cur_person_name.replace("'", "").rsplit(" ", 1)[1] - else: - role_name = cur_person_name.replace("'", "") - role_firstname = "" - create_role = """ - INSERT INTO utilisateurs.t_roles (nom_role, prenom_role, email) - VALUES ( - '{role_name}', - '{role_firstname}', - '{cur_person_mail}' - ) - """.format( - role_name=role_name, - role_firstname=role_firstname, - cur_person_mail=cur_person_mail.replace("'", ""), - ) - cursor.execute(create_role) - conn.commit() - print("New person created...") - - -def update_person(cur_person_name, cur_person_mail): - if len(cur_person_name.rsplit(" ", 1)) == 2: - role_name = cur_person_name.replace("'", "").rsplit(" ", 1)[0] - role_firstname = cur_person_name.replace("'", "").rsplit(" ", 1)[1] - else: - role_name = cur_person_name.replace("'", "") - role_firstname = "" - update_role = """ - UPDATE utilisateurs.t_roles - SET - nom_role='{role_name}', - prenom_role='{role_firstname}', - email='{cur_person_mail}', - WHERE nom_role||( - CASE WHEN prenom_role=\'\' THEN \'\' ELSE \' \'||prenom_role END) - = '{cur_person_name}' - """.format( - role_name=role_name, - role_firstname=role_firstname, - cur_person_mail=cur_person_mail.replace("'", ""), - cur_person_name=cur_person_name, - ) - cursor.execute(update_role) - conn.commit() - print("Existing person updated...") - - -""" -Acquisition frameworks -""" - - -# Check existing to avoid duplicates -def get_known_af(): - cursor.execute( - "SELECT DISTINCT unique_acquisition_framework_id FROM gn_meta.t_acquisition_frameworks" - ) - results = cursor.fetchall() - return [r[0].upper() for r in results] - # known = results.replace("(","").replace(")","").replace("[","").replace("]","").replace(",","").replace("'","").split(" ") - # return(known) - - -def insert_update_t_acquisition_frameworks(CURRENT_AF_ROOT, action, cur_af_uuid): - identifiantCadre = cur_af_uuid - libelle = get_single_data(CURRENT_AF_ROOT, af_main, "libelle") - description = get_single_data(CURRENT_AF_ROOT, af_main, "description") - motCle = ( - "'" - + str(get_tuple_data(CURRENT_AF_ROOT, af_main, "motCle")) - .replace("'", "") - .replace("[", "") - .replace("]", "") - .replace('"', "") - + "'" - ) - descriptionCible = get_single_data(CURRENT_AF_ROOT, af_main, "descriptionCible") - cibleEcologiqueOuGeologique = get_single_data( - CURRENT_AF_ROOT, af_main, "cibleEcologiqueOuGeologique" - ) - precisionGeographique = get_single_data(CURRENT_AF_ROOT, af_main, "precisionGeographique") - # - # territorial level : DEFAULT='National' - if get_single_data(CURRENT_AF_ROOT, af_main, "niveauTerritorial") == "''": - id_niveauTerritorial = ( - "(SELECT n.id_nomenclature FROM ref_nomenclatures.t_nomenclatures n," - + "ref_nomenclatures.bib_nomenclatures_types t WHERE t.id_type=n.id_type AND t.mnemonique='NIVEAU_TERRITORIAL' AND" - + " cd_nomenclature='3')::integer" - ) - else: - id_niveauTerritorial = ( - "(SELECT n.id_nomenclature FROM ref_nomenclatures.t_nomenclatures n," - + "ref_nomenclatures.bib_nomenclatures_types t WHERE t.id_type=n.id_type AND t.mnemonique='NIVEAU_TERRITORIAL' AND" - + " cd_nomenclature=" - + get_single_data(CURRENT_AF_ROOT, af_main, "niveauTerritorial") 
- + ")::integer" - ) - # Financing Type : DEFAULT="Publique" - if get_single_data(CURRENT_AF_ROOT, af_main, "typeFinancement") == "''": - id_typeFinancement = ( - "(SELECT n.id_nomenclature FROM ref_nomenclatures.t_nomenclatures n, ref_nomenclatures.bib_nomenclatures_types t WHERE" - + " t.id_type=n.id_type AND t.mnemonique='TYPE_FINANCEMENT' AND cd_nomenclature='1')::integer" - ) - else: - id_typeFinancement = ( - "(SELECT n.id_nomenclature FROM ref_nomenclatures.t_nomenclatures n, ref_nomenclatures.bib_nomenclatures_types t WHERE" - + " t.id_type=n.id_type AND t.mnemonique='TYPE_FINANCEMENT' AND cd_nomenclature=" - + get_single_data(CURRENT_AF_ROOT, af_main, "typeFinancement") - + ")::integer" - ) - # estMetaCadre : DEFAULT=False - if get_single_data(CURRENT_AF_ROOT, af_main, "estMetaCadre") == "''": - estMetaCadre = "false" - else: - estMetaCadre = get_single_data(CURRENT_AF_ROOT, af_main, "estMetaCadre") - # dateLancement : DEFAULT='01/01/1800' - if get_single_data(CURRENT_AF_ROOT, af_temp_ref, "dateLancement") == "''": - dateLancement = "(SELECT '01/01/1800'::timestamp without time zone)" - else: - dateLancement = ( - get_single_data(CURRENT_AF_ROOT, af_temp_ref, "dateLancement") - + "::timestamp without time zone" - ) - # dateCloture - if get_single_data(CURRENT_AF_ROOT, af_temp_ref, "dateCloture") == "''": - dateCloture = "NULL" - else: - dateCloture = ( - get_single_data(CURRENT_AF_ROOT, af_temp_ref, "dateCloture") - + "::timestamp without time zone" - ) - # dateCreationMtd - if get_single_data(CURRENT_AF_ROOT, af_main, "dateCreationMtd") == "''": - dateCreationMtd = "NULL" - else: - dateCreationMtd = ( - get_single_data(CURRENT_AF_ROOT, af_main, "dateCreationMtd") - + "::timestamp without time zone" - ) - # dateMiseAJourMtd - if get_single_data(CURRENT_AF_ROOT, af_main, "dateMiseAJourMtd") == "''": - dateMiseAJourMtd = "NULL" - else: - dateMiseAJourMtd = ( - get_single_data(CURRENT_AF_ROOT, af_main, "dateMiseAJourMtd") - + "::timestamp without time zone" - ) - # Write and run query - if action == "create": - cur_query = """ - INSERT INTO gn_meta.t_acquisition_frameworks( - unique_acquisition_framework_id, - acquisition_framework_name, - acquisition_framework_desc, - id_nomenclature_territorial_level, - territory_desc, - keywords, - id_nomenclature_financing_type, - target_description, - ecologic_or_geologic_target, - is_parent, - acquisition_framework_start_date, - acquisition_framework_end_date, - meta_create_date, - meta_update_date - ) - VALUES ( - '{identifiantCadre}', - {libelle}, - {description}, - {id_niveauTerritorial}, - {precisionGeographique}, - {motCle}, - {id_typeFinancement}, - {descriptionCible}, - {cibleEcologiqueOuGeologique}, - {estMetaCadre}, - {dateLancement}, - {dateCloture}, - {dateCreationMtd}, - {dateMiseAJourMtd} - ) RETURNING id_acquisition_framework; - """.format( - identifiantCadre=identifiantCadre, - libelle=libelle[0:254], - description=description, - id_niveauTerritorial=id_niveauTerritorial, - precisionGeographique=precisionGeographique, - motCle=motCle, - id_typeFinancement=id_typeFinancement, - descriptionCible=descriptionCible, - cibleEcologiqueOuGeologique=cibleEcologiqueOuGeologique, - estMetaCadre=estMetaCadre, - dateLancement=dateLancement, - dateCloture=dateCloture, - dateCreationMtd=dateCreationMtd, - dateMiseAJourMtd=dateMiseAJourMtd, - ) - result = "New acquisition framework created..." 
- elif action == "update": - cur_query = """ - UPDATE gn_meta.t_acquisition_frameworks SET - acquisition_framework_name={libelle}, - acquisition_framework_desc={description}, - id_nomenclature_territorial_level={id_niveauTerritorial}, - territory_desc={precisionGeographique}, - keywords={motCle}, - id_nomenclature_financing_type={id_typeFinancement}, - target_description={descriptionCible}, - ecologic_or_geologic_target={cibleEcologiqueOuGeologique}, - is_parent={estMetaCadre}, - acquisition_framework_start_date={dateLancement}, - acquisition_framework_end_date={dateCloture}, - meta_create_date={dateCreationMtd}, - meta_update_date={dateMiseAJourMtd} - WHERE unique_acquisition_framework_id='{cur_af_uuid}' - RETURNING id_acquisition_framework; - """.format( - libelle=libelle[0:254], - description=description, - id_niveauTerritorial=id_niveauTerritorial, - precisionGeographique=precisionGeographique, - motCle=motCle, - id_typeFinancement=id_typeFinancement, - descriptionCible=descriptionCible, - cibleEcologiqueOuGeologique=cibleEcologiqueOuGeologique, - estMetaCadre=estMetaCadre, - dateLancement=dateLancement, - dateCloture=dateCloture, - dateCreationMtd=dateCreationMtd, - dateMiseAJourMtd=dateMiseAJourMtd, - cur_af_uuid=cur_af_uuid, - ) - result = "Existing acquisition framework updated..." - cursor.execute(cur_query) - r = cursor.fetchone() - created_or_returned_id = None - if r: - created_or_returned_id = r[0] - conn.commit() - return created_or_returned_id - - -# Functions deleting existing cor before create or update cor tables -def delete_cor_af(table, cur_af_uuid): - # tables : [voletsinp, objectif, territory, publication, actor] - cur_delete_query = """ - DELETE FROM gn_meta.cor_acquisition_framework_{table} - WHERE id_acquisition_framework=( - SELECT id_acquisition_framework FROM gn_meta.t_acquisition_frameworks - WHERE unique_acquisition_framework_id='{cur_af_uuid}' - ) - """.format( - table=table, cur_af_uuid=cur_af_uuid - ) - cursor.execute(cur_delete_query) - conn.commit() - - -# Functions feeding cor_acquisition_framework tables -def insert_cor_af_voletsinp(cur_af_uuid, cur_volet_sinp): - cur_insert_query = ( - "INSERT INTO gn_meta.cor_acquisition_framework_voletsinp (id_acquisition_framework,id_nomenclature_voletsinp)" - + "VALUES ((SELECT id_acquisition_framework FROM gn_meta.t_acquisition_frameworks WHERE unique_acquisition_framework_id='" - + cur_af_uuid - + "'), " - + "(SELECT id_nomenclature FROM ref_nomenclatures.t_nomenclatures WHERE id_type='113' AND cd_nomenclature='" - + cur_volet_sinp - + "'))" - ) - cursor.execute(cur_insert_query) - conn.commit() - - -def insert_cor_af_objectifs(cur_af_uuid, cur_objectif): - cur_insert_query = ( - "INSERT INTO gn_meta.cor_acquisition_framework_objectif (id_acquisition_framework,id_nomenclature_objectif)" - + "VALUES ((SELECT id_acquisition_framework FROM gn_meta.t_acquisition_frameworks WHERE unique_acquisition_framework_id='" - + cur_af_uuid - + "'), " - + "(SELECT id_nomenclature FROM ref_nomenclatures.t_nomenclatures WHERE id_type='108' AND cd_nomenclature='" - + cur_objectif - + "'))" - ) - cursor.execute(cur_insert_query) - conn.commit() - - -def insert_cor_af_territory(cur_af_uuid, cur_territory): - cur_insert_query = ( - "INSERT INTO gn_meta.cor_acquisition_framework_territory (id_acquisition_framework,id_nomenclature_territory)" - + "VALUES ((SELECT id_acquisition_framework FROM gn_meta.t_acquisition_frameworks WHERE unique_acquisition_framework_id='" - + cur_af_uuid - + "'), " - + 
"ref_nomenclatures.get_id_nomenclature('TERRITOIRE', '" - + cur_territory - + "'))" - ) - cursor.execute(cur_insert_query) - conn.commit() - - -def insert_cor_af_publications(cur_af_uuid, af_publications): - cur_insert_query = ( - "INSERT INTO gn_meta.cor_acquisition_framework_publication (id_acquisition_framework,id_publication)" - + "VALUES ((SELECT id_acquisition_framework FROM gn_meta.t_acquisition_frameworks WHERE unique_acquisition_framework_id='" - + cur_af_uuid - + "'), " - + "(SELECT id_publication FROM gn_meta.sinp_datatype_publications WHERE publication_reference=" - + af_publications - + "))" - ) - cursor.execute(cur_insert_query) - conn.commit() - - -def insert_cor_af_actor_organism(cur_af_uuid, cur_organism_uuid, cur_actor_role): - cur_insert_query = ( - "INSERT INTO gn_meta.cor_acquisition_framework_actor (id_acquisition_framework,id_organism,id_nomenclature_actor_role)" - + "VALUES ((SELECT id_acquisition_framework FROM gn_meta.t_acquisition_frameworks WHERE unique_acquisition_framework_id='" - + cur_af_uuid - + "'), " - + "(SELECT id_organisme FROM utilisateurs.bib_organismes WHERE uuid_organisme=" - + str.lower(cur_organism_uuid) - + "), " - + "ref_nomenclatures.get_id_nomenclature('ROLE_ACTEUR', " - + cur_actor_role - + "))" - ) - try: - cursor.execute(cur_insert_query) - except: - pass - conn.commit() - - -def insert_cor_af_actor_person(cur_af_uuid, cur_person_name, cur_actor_role): - cur_insert_query = ( - "INSERT INTO gn_meta.cor_acquisition_framework_actor (id_acquisition_framework,id_role,id_nomenclature_actor_role)" - + "VALUES ((SELECT id_acquisition_framework FROM gn_meta.t_acquisition_frameworks WHERE unique_acquisition_framework_id='" - + cur_af_uuid - + "'), " - + "(SELECT id_role FROM utilisateurs.t_roles WHERE nom_role||(CASE WHEN prenom_role='' THEN '' ELSE ' '||prenom_role END)='" - + cur_person_name - + "'), " - + "ref_nomenclatures.get_id_nomenclature('ROLE_ACTEUR', " - + cur_actor_role - + "))" - ) - try: - cursor.execute(cur_insert_query) - except: - pass - conn.commit() - - -""" - Datasets -""" - - -def get_known_ds(): - cursor.execute("SELECT DISTINCT unique_dataset_id FROM gn_meta.t_datasets") - results = str(cursor.fetchall()) - known = ( - results.replace("(", "") - .replace(")", "") - .replace("[", "") - .replace("]", "") - .replace(",", "") - .replace("'", "") - .split(" ") - ) - return known - - -def insert_update_t_datasets(CURRENT_DS_ROOT, action, cur_ds_uuid, id_ca): - # id_acquisition_framework = '(SELECT id_acquisition_frameget_tuple_datawork FROM gn_meta.t_acquisition_frameworks WHERE unique_acquisition_framework_id='\ - # + str.lower(get_single_data(CURRENT_DS_ROOT, af_main(ds_main,'identifiantCadre'))+')' - id_acquisition_framework = id_ca - dataset_name = get_single_data(CURRENT_DS_ROOT, ds_main, "libelle") - dataset_shortname = get_single_data(CURRENT_DS_ROOT, ds_main, "libelleCourt") - if get_single_data(CURRENT_DS_ROOT, ds_main, "description") != "": - dataset_desc = get_single_data(CURRENT_DS_ROOT, ds_main, "description") - else: - dataset_desc = "''" - keywords = ( - "'" - + str(get_tuple_data(CURRENT_DS_ROOT, ds_main, "motCle")) - .replace("'", "") - .replace("[", "") - .replace("]", "") - .replace('"', "") - + "'" - ) - marine_domain = get_single_data(CURRENT_DS_ROOT, ds_main, "domaineMarin") - terrestrial_domain = get_single_data(CURRENT_DS_ROOT, ds_main, "domaineTerrestre") - validable = "true" - bbox_west = get_single_data(CURRENT_DS_ROOT, ds_bbox, "borneOuest") - bbox_east = get_single_data(CURRENT_DS_ROOT, ds_bbox, 
"borneEst") - bbox_south = get_single_data(CURRENT_DS_ROOT, ds_bbox, "borneSud") - bbox_north = get_single_data(CURRENT_DS_ROOT, ds_bbox, "borneNord") - # Default value (information not found in xml files) - id_nomenclature_collecting_method = ( - "ref_nomenclatures.get_id_nomenclature('METHO_RECUEIL', '12')" - ) - # Default value = Données élémentaires d'échanges (information not found in xml files) 'Ne sait pas' - id_nomenclature_data_origin = "ref_nomenclatures.get_id_nomenclature('DS_PUBLIQUE', 'NSP')" - # Default value (information not found in xml files) 'Ne sait pas' - id_nomenclature_source_status = "ref_nomenclatures.get_id_nomenclature('STATUT_SOURCE', 'NSP')" - # Default value (information not found in xml files) 'Dataset' - id_nomenclature_resource_type = "ref_nomenclatures.get_id_nomenclature('RESOURCE_TYP', '1')" - # Default value (information not found in xml files) 'Occurrence de taxon' - id_nomenclature_data_type = "ref_nomenclatures.get_id_nomenclature('DATA_TYP', '1')" - # Default value - active = "false" - # dateCreationMtd - if get_single_data(CURRENT_DS_ROOT, ds_main, "dateCreation") == "''": - meta_create_date = "NULL" - else: - meta_create_date = ( - get_single_data(CURRENT_DS_ROOT, ds_main, "dateCreation") - + "::timestamp without time zone" - ) - # dateMiseAJourMtd - if get_single_data(CURRENT_DS_ROOT, ds_main, "dateRevision") == "''": - meta_update_date = "NULL" - else: - meta_update_date = ( - get_single_data(CURRENT_DS_ROOT, ds_main, "dateRevision") - + "::timestamp without time zone" - ) - # If several objectives, set default value 'Autre' - if len(get_tuple_data(CURRENT_DS_ROOT, ds_main, "objectifJdd")) == 1: - id_nomenclature_dataset_objectif = ( - "ref_nomenclatures.get_id_nomenclature('JDD_OBJECTIFS', " - + get_single_data(CURRENT_DS_ROOT, ds_main, "objectifJdd") - + ")" - ) - else: - id_nomenclature_dataset_objectif = ( - "ref_nomenclatures.get_id_nomenclature('JDD_OBJECTIFS', '7.1')" - ) - # If action is CREATE - if action == "create": - cur_query = ( - "INSERT INTO gn_meta.t_datasets(unique_dataset_id, id_acquisition_framework, dataset_name, dataset_shortname, " - + "dataset_desc, id_nomenclature_data_type, keywords, marine_domain, terrestrial_domain, id_nomenclature_dataset_objectif, " - + "bbox_west, bbox_east, bbox_south, bbox_north, id_nomenclature_collecting_method, id_nomenclature_data_origin, id_nomenclature_source_status, " - + "id_nomenclature_resource_type, validable, active, meta_create_date, meta_update_date)" - + "VALUES (" - + unique_dataset_id - + ", " - + id_acquisition_framework - + ", " - + dataset_name - + ", " - + dataset_shortname - + "," - + dataset_desc - + ", " - + id_nomenclature_data_type - + ", " - + keywords - + ", " - + marine_domain - + "," - + terrestrial_domain - + ", " - + id_nomenclature_dataset_objectif - + ", " - + bbox_west - + ", " - + bbox_east - + ", " - + bbox_south - + ", " - + bbox_north - + ", " - + id_nomenclature_collecting_method - + ", " - + id_nomenclature_data_origin - + ", " - + id_nomenclature_source_status - + ", " - + id_nomenclature_resource_type - + ", " - + validable - + ", " - + active - + "," - + meta_create_date - + ", " - + meta_update_date - + " );" - ) - result = "New dataset created..." 
- elif action == "update": - cur_query = """ - UPDATE gn_meta.t_datasets - SET - id_acquisition_framework={id_acquisition_framework}, - dataset_name={dataset_name}, - dataset_shortname={dataset_shortname}, - dataset_desc={dataset_desc}, - id_nomenclature_data_type={id_nomenclature_data_type}, - keywords={keywords}, - marine_domain={marine_domain}, - terrestrial_domain={terrestrial_domain}, - id_nomenclature_dataset_objectif={id_nomenclature_dataset_objectif}, - bbox_west={bbox_west}, - bbox_east={bbox_east}, - bbox_south={bbox_south}, - bbox_north={bbox_north}, - id_nomenclature_collecting_method={id_nomenclature_collecting_method}, - id_nomenclature_data_origin={id_nomenclature_data_origin}, - id_nomenclature_source_status={id_nomenclature_source_status}, - id_nomenclature_resource_type={id_nomenclature_resource_type}, - validable={validable}, - active={active}, - meta_create_date={meta_create_date}, - meta_update_date={meta_update_date} - WHERE unique_dataset_id='{cur_ds_uuid}' - """.format( - id_acquisition_framework=id_acquisition_framework, - dataset_name=dataset_name, - dataset_shortname=dataset_shortname, - dataset_desc=dataset_desc, - id_nomenclature_data_type=id_nomenclature_data_type, - keywords=keywords, - marine_domain=marine_domain, - terrestrial_domain=terrestrial_domain, - id_nomenclature_dataset_objectif=id_nomenclature_dataset_objectif, - bbox_west=bbox_west, - bbox_east=bbox_east, - bbox_south=bbox_south, - bbox_north=bbox_north, - id_nomenclature_collecting_method=id_nomenclature_collecting_method, - id_nomenclature_data_origin=id_nomenclature_data_origin, - id_nomenclature_source_status=id_nomenclature_source_status, - id_nomenclature_resource_type=id_nomenclature_resource_type, - validable=validable, - active=active, - meta_create_date=meta_create_date, - meta_update_date=meta_update_date, - cur_ds_uuid=cur_ds_uuid, - ) - result = "Existing dataset updated..." 
- cursor.execute(cur_query) - conn.commit() - - -# Functions deleting existing cor before create or update cor tables -def delete_cor_ds(table, cur_ds_uuid): - # tables : [territory, protocol, actor] - cur_delete_query = ( - "DELETE FROM gn_meta.cor_dataset_" - + table - + " WHERE id_dataset=(SELECT id_dataset FROM gn_meta.t_datasets WHERE unique_dataset_id='" - + cur_ds_uuid - + "')" - ) - cursor.execute(cur_delete_query) - conn.commit() - - -def insert_cor_ds_territory(cur_ds_uuid, cur_territory): - cur_insert_query = ( - "INSERT INTO gn_meta.cor_dataset_territory (id_dataset,id_nomenclature_territory)" - + "VALUES ((SELECT id_dataset FROM gn_meta.t_datasets WHERE unique_dataset_id='" - + cur_ds_uuid - + "'), " - + "ref_nomenclatures.get_id_nomenclature('TERRITOIRE', '" - + cur_territory - + "'))" - ) - cursor.execute(cur_insert_query) - conn.commit() - - -def insert_cor_ds_protocol(cur_ds_uuid, cur_protocol): - query = f""" - INSERT INTO gn_meta.cor_dataset_protocol(id_dataset, id_protocol) - VALUES ( - (SELECT id_dataset FROM gn_meta.t_datasets WHERE unique_dataset_id='{cur_ds_uuid}' LIMIT 1), - (SELECT id_protocol FROM gn_meta.sinp_datatype_protocols WHERE protocol_name={cur_protocol_name} LIMIT 1) - ) - """ - try: - cursor.execute(query) - conn.commit() - except Exception as e: - conn.rollback() - print(e) - - -def insert_cor_ds_actor_organism(cur_ds_uuid, cur_organism_uuid, cur_actor_role): - cur_insert_query = ( - "INSERT INTO gn_meta.cor_dataset_actor (id_dataset,id_organism,id_nomenclature_actor_role)" - + "VALUES ((SELECT id_dataset FROM gn_meta.t_datasets WHERE unique_dataset_id='" - + cur_ds_uuid - + "'), " - + "(SELECT id_organisme FROM utilisateurs.bib_organismes WHERE uuid_organisme=" - + str.lower(cur_organism_uuid) - + "), " - + "ref_nomenclatures.get_id_nomenclature('ROLE_ACTEUR', " - + cur_actor_role - + "))" - ) - try: - cursor.execute(cur_insert_query) - except: - pass - conn.commit() - - -def insert_cor_ds_actor_person(cur_ds_uuid, cur_person_name, cur_actor_role): - cur_insert_query = ( - "INSERT INTO gn_meta.cor_dataset_actor (id_dataset,id_role,id_nomenclature_actor_role)" - + "VALUES ((SELECT id_dataset FROM gn_meta.t_datasets WHERE unique_dataset_id='" - + cur_ds_uuid - + "'), " - + "(SELECT id_role FROM utilisateurs.t_roles WHERE nom_role||(CASE WHEN prenom_role='' THEN '' ELSE ' '||prenom_role END)='" - + cur_person_name - + "'), " - + "ref_nomenclatures.get_id_nomenclature('ROLE_ACTEUR', " - + cur_actor_role - + "))" - ) - try: - cursor.execute(cur_insert_query) - except: - pass - conn.commit() - - -""" - Getting XML Files & pushing Acquisition Frameworks data in GeoNature DataBase -""" - - -def insert_CA(cur_af_uuid): - """ - insert a CA and return the created ID - """ - if cur_af_uuid[1:-1] in get_known_af(): - action = "update" - else: - action = "create" - # Get and parse corresponding XML File - # remove '' - cur_af_uuid = cur_af_uuid.upper()[1:-1] - af_URL = "https://inpn.mnhn.fr/mtd/cadre/export/xml/GetRecordById?id={}".format(cur_af_uuid) - # print('############## URL CA') - # print(af_URL) - request = requests.get(af_URL) - if request.status_code == 200: - open("{}.xml".format(cur_af_uuid), "wb").write(request.content) - CURRENT_AF_ROOT = ET.parse("{}.xml".format(cur_af_uuid)).getroot() - # Feed t_acquisition_frameworks - af_id = insert_update_t_acquisition_frameworks(CURRENT_AF_ROOT, action, cur_af_uuid) - # Feed cor_acquisition_framework_voletsinp - delete_cor_af("voletsinp", cur_af_uuid) - volets_sinp = get_tuple_data(CURRENT_AF_ROOT, af_main, 
"voletSINP") - if volets_sinp != "''": - for volet_iter in range(len(volets_sinp)): - cur_volet_sinp = volets_sinp[volet_iter].replace("'", "") - insert_cor_af_voletsinp(cur_af_uuid, cur_volet_sinp) - # Feed cor_acquisition_framework_objectif - delete_cor_af("objectif", cur_af_uuid) - af_objectifs = get_tuple_data(CURRENT_AF_ROOT, af_main, "objectif") - if af_objectifs != "''": - for objectif_iter in range(len(af_objectifs)): - cur_objectif = af_objectifs[objectif_iter].replace("'", "") - insert_cor_af_objectifs(cur_af_uuid, cur_objectif) - # if exists : feed cor_acquisition_framework_territory - cursor.execute( - "select exists(select * from information_schema.tables where table_name='cor_acquisition_framework_territory')" - ) - if cursor.fetchone()[0] == True: - delete_cor_af("territory", cur_af_uuid) - af_territories = get_tuple_data(CURRENT_AF_ROOT, af_main, "territoire") - if af_territories != "''": - for territory_iter in range(len(af_territories)): - cur_territory = af_territories[territory_iter].replace("'", "") - insert_cor_af_territory(cur_af_uuid, cur_territory) - # Create or update publications + Feed cor_acquisition_framework_publication - # Get publication data - delete_cor_af("publication", cur_af_uuid) - af_publications = CURRENT_AF_ROOT.findall( - "gml:featureMember/ca:CadreAcquisition/ca:referenceBiblio/ca:Publication", - xml_namespaces, - ) - for cur_publi in range(len(af_publications)): - cur_publication = get_inner_data(af_publications, cur_publi, "referencePublication") - if get_inner_data(af_publications, cur_publi, "URLPublication") != None: - cur_url = get_inner_data(af_publications, cur_publi, "URLPublication") - else: - cur_url = "''" - # Create or update publication - if "(" + cur_publication.replace("''", "'") + ",)" not in get_known_publications(): - insert_sinp_datatype_publications(cur_publication, cur_url) - else: - update_sinp_datatype_publications(cur_publication, cur_url) - # Feed cor table - insert_cor_af_publications(cur_af_uuid, cur_publication) - # ACTORS - # Create or update actors and feed cor table - delete_cor_af("actor", cur_af_uuid) - # For main actor (single) - cur_actor_role = get_single_data(CURRENT_AF_ROOT, af_main_actor, "roleActeur") - # Person : name is the reference - if get_single_data(CURRENT_AF_ROOT, af_main_actor, "nomPrenom") != "''": - cur_person_name = ( - get_single_data(CURRENT_AF_ROOT, af_main_actor, "nomPrenom") - .replace("\t", "") - .replace("'", "") - .rstrip() - ) - if get_single_data(CURRENT_AF_ROOT, af_main_actor, "mail") != "''": - cur_person_mail = get_single_data(CURRENT_AF_ROOT, af_main_actor, "mail").replace( - "'", "" - ) - else: - cur_person_mail = "" - if "('" + cur_person_name.replace("'", "") + "',)" not in get_known_persons(): - insert_person(cur_person_name, cur_person_mail) - # else : - # update_person(cur_person_name, cur_person_mail) - insert_cor_af_actor_person(cur_af_uuid, cur_person_name, cur_actor_role) - # Organism : the uuid is the reference - if ( - get_single_data(CURRENT_AF_ROOT, af_main_actor, "idOrganisme") != "''" - and get_single_data(CURRENT_AF_ROOT, af_main_actor, "organisme") != "''" - ): - cur_organism_uuid = get_single_data(CURRENT_AF_ROOT, af_main_actor, "idOrganisme") - cur_organism_name = get_single_data(CURRENT_AF_ROOT, af_main_actor, "organisme") - if str.lower(cur_organism_uuid).replace("'", "") not in get_known_organisms(): - insert_organism(cur_organism_uuid, cur_organism_name) - else: - update_organism(cur_organism_uuid, cur_organism_name) - 
insert_cor_af_actor_organism(cur_af_uuid, cur_organism_uuid, cur_actor_role) - # For others actors - af_othersactors = CURRENT_AF_ROOT.findall( - "gml:featureMember/ca:CadreAcquisition/ca:acteurAutre/ca:ActeurType", xml_namespaces - ) - for other_actor in range(len(af_othersactors)): - cur_actor_role = get_inner_data(af_othersactors, other_actor, "roleActeur") - # Person : name is the reference - if get_inner_data(af_othersactors, other_actor, "nomPrenom") != "''": - cur_person_name = ( - get_inner_data(af_othersactors, other_actor, "nomPrenom") - .replace("\t", "") - .replace("'", "") - .rstrip() - ) - if get_inner_data(af_othersactors, other_actor, "mail") != "''": - cur_person_mail = get_inner_data(af_othersactors, other_actor, "mail") - else: - cur_person_mail = "" - if "('" + cur_person_name.replace("'", "") + "',)" not in get_known_persons(): - insert_person(cur_person_name, cur_person_mail) - # else : - # update_person(cur_person_name, cur_person_mail) - insert_cor_af_actor_person(cur_af_uuid, cur_person_name, cur_actor_role) - # Organism : the uuid is the reference - if ( - get_inner_data(af_othersactors, other_actor, "idOrganisme") != "''" - and get_inner_data(af_othersactors, other_actor, "organisme") != "''" - ): - cur_organism_uuid = get_inner_data(af_othersactors, other_actor, "idOrganisme") - cur_organism_name = get_inner_data(af_othersactors, other_actor, "organisme") - if str.lower(cur_organism_uuid).replace("'", "") not in get_known_organisms(): - insert_organism(cur_organism_uuid, cur_organism_name) - else: - update_organism(cur_organism_uuid, cur_organism_name) - insert_cor_af_actor_organism(cur_af_uuid, cur_organism_uuid, cur_actor_role) - # Delete files if choosen - if DELETE_XML_FILE_AFTER_IMPORT == "True": - os.remove("{}.xml".format(cur_af_uuid)) - return af_id - return None - - -# Parse and import data in GeoNature database - -""" - Getting XML Files & pushing Datasets data in GeoNature DataBase -""" -# Getting uuid list of JDD to import - - -cursor.execute('SELECT DISTINCT "' + CHAMP_ID_JDD + '" FROM ' + TABLE_DONNEES_INPN) -ds_uuid_list = cursor.fetchall() - -for ds_iter in range(len(ds_uuid_list)): - cur_ds_uuid = ds_uuid_list[ds_iter][0] - if cur_ds_uuid not in get_known_ds(): - action = "create" - else: - action = "update" - # Get and parse corresponding XML File - ds_URL = "https://inpn.mnhn.fr/mtd/cadre/jdd/export/xml/GetRecordById?id={}".format( - cur_ds_uuid.upper() - ) - req = requests.get(ds_URL) - if req.status_code == 200: - open("{}.xml".format(cur_ds_uuid), "wb").write(requests.get(ds_URL).content) - CURRENT_DS_ROOT = ET.parse("{}.xml".format(cur_ds_uuid)).getroot() - # insertion des CA - current_af_uuid = get_single_data(CURRENT_DS_ROOT, ds_main, "identifiantCadre") - current_id_ca = insert_CA(current_af_uuid) - # Feed t_datasets - insert_update_t_datasets(CURRENT_DS_ROOT, action, cur_ds_uuid, current_id_ca) - # Feed cor territory - delete_cor_ds("territory", cur_ds_uuid) - ds_territories = get_tuple_data(CURRENT_DS_ROOT, ds_main, "territoire") - if ds_territories != "''": - for territory_iter in range(len(ds_territories)): - cur_territory = ds_territories[territory_iter].replace("'", "") - insert_cor_ds_territory(cur_ds_uuid, cur_territory) - # Feed cor protocol - delete_cor_ds("protocol", cur_ds_uuid) - ds_protocols = CURRENT_DS_ROOT.findall( - "gml:featureMember/jdd:JeuDeDonnees/jdd:protocoles/jdd:ProtocoleType", xml_namespaces - ) - for cur_proto in range(len(ds_protocols)): - if get_inner_data(ds_protocols, cur_proto, "libelleProtocole") != 
"''": - cur_protocol_name = get_inner_data(ds_protocols, cur_proto, "libelleProtocole") - if get_inner_data(ds_protocols, cur_proto, "descriptionProtocole") != None: - cur_protocol_desc = get_inner_data( - ds_protocols, cur_proto, "descriptionProtocole" - ) - else: - cur_protocol_desc = "''" - if get_inner_data(ds_protocols, cur_proto, "url") != None: - cur_protocol_url = get_inner_data(ds_protocols, cur_proto, "url") - else: - cur_protocol_url = "''" - # Create or update publication - if "('" + cur_protocol_name.replace("''", "'") + "',)" not in get_known_protocols(): - insert_sinp_datatype_protocols( - cur_protocol_name, cur_protocol_desc, cur_protocol_url - ) - else: - update_sinp_datatype_protocols( - cur_protocol_name, cur_protocol_desc, cur_protocol_url - ) - insert_cor_ds_protocol(cur_ds_uuid, cur_protocol_name) - # ACTORS - # Create or update actors and feed cor table - delete_cor_ds("actor", cur_ds_uuid) - # For contact_points - ds_pointscontacts = CURRENT_DS_ROOT.findall( - "gml:featureMember/jdd:JeuDeDonnees/jdd:pointContactJdd/jdd:ActeurType", xml_namespaces - ) - for point_contact in range(len(ds_pointscontacts)): - # Person : name is the reference - cur_actor_role = get_inner_data(ds_pointscontacts, point_contact, "roleActeur") - if get_inner_data(ds_pointscontacts, point_contact, "nomPrenom") != "''": - cur_person_name = ( - get_inner_data(ds_pointscontacts, point_contact, "nomPrenom") - .replace("\t", "") - .replace("'", "") - .rstrip() - ) - if get_inner_data(ds_pointscontacts, point_contact, "mail") != "''": - cur_person_mail = get_inner_data(ds_pointscontacts, point_contact, "mail") - else: - cur_person_mail = "" - if "('" + cur_person_name.replace("'", "") + "',)" not in get_known_persons(): - insert_person(cur_person_name, cur_person_mail) - # else : - # update_person(cur_person_name, cur_person_mail) - insert_cor_ds_actor_person(cur_ds_uuid, cur_person_name, cur_actor_role) - # Organism : the uuid is the reference - if ( - get_inner_data(ds_pointscontacts, point_contact, "idOrganisme") != "''" - and get_inner_data(ds_pointscontacts, point_contact, "organisme") != "''" - ): - cur_organism_uuid = get_inner_data(ds_pointscontacts, point_contact, "idOrganisme") - cur_organism_name = get_inner_data(ds_pointscontacts, point_contact, "organisme") - if str.lower(cur_organism_uuid).replace("'", "") not in get_known_organisms(): - insert_organism(cur_organism_uuid, cur_organism_name) - else: - update_organism(cur_organism_uuid, cur_organism_name) - insert_cor_ds_actor_organism(cur_ds_uuid, cur_organism_uuid, cur_actor_role) - # For PF_contact (single) - cur_actor_role = get_single_data(CURRENT_DS_ROOT, ds_contact_pf, "roleActeur") - # Person : name is the reference - if get_single_data(CURRENT_DS_ROOT, ds_contact_pf, "nomPrenom") != "''": - cur_person_name = ( - get_single_data(CURRENT_DS_ROOT, ds_contact_pf, "nomPrenom") - .replace("\t", "") - .replace("'", "") - .rstrip() - ) - if get_single_data(CURRENT_DS_ROOT, ds_contact_pf, "mail") != "''": - cur_person_mail = get_single_data(CURRENT_DS_ROOT, ds_contact_pf, "mail").replace( - "'", "" - ) - else: - cur_person_mail = "" - if "('" + cur_person_name.replace("'", "") + "',)" not in get_known_persons(): - insert_person(cur_person_name, cur_person_mail) - # else : - # update_person(cur_person_name, cur_person_mail) - insert_cor_ds_actor_person(cur_ds_uuid, cur_person_name, cur_actor_role) - # Organism : the uuid is the reference - if ( - get_single_data(CURRENT_DS_ROOT, ds_contact_pf, "idOrganisme") != "''" - and 
get_single_data(CURRENT_DS_ROOT, ds_contact_pf, "organisme") != "''" - ): - cur_organism_uuid = get_single_data(CURRENT_DS_ROOT, ds_contact_pf, "idOrganisme") - cur_organism_name = get_single_data(CURRENT_DS_ROOT, ds_contact_pf, "organisme") - if str.lower(cur_organism_uuid).replace("'", "") not in get_known_organisms(): - insert_organism(cur_organism_uuid, cur_organism_name) - else: - update_organism(cur_organism_uuid, cur_organism_name) - insert_cor_ds_actor_organism(cur_ds_uuid, cur_organism_uuid, cur_actor_role) - # Delete files if choosen - if DELETE_XML_FILE_AFTER_IMPORT == "True": - os.remove("{}.xml".format(cur_ds_uuid)) - else: - print(f"{cur_ds_uuid} not found") diff --git a/data/scripts/import_ginco/import_mtd.sh b/data/scripts/import_ginco/import_mtd.sh deleted file mode 100755 index a1b3bb7a5c..0000000000 --- a/data/scripts/import_ginco/import_mtd.sh +++ /dev/null @@ -1,22 +0,0 @@ -#!/usr/bin/env bash - -. settings.ini -# export all variable in settings.ini -# -> they are available in python os.environ -export $(grep -i --regexp ^[a-z] settings.ini | cut -d= -f1) -export TABLE_DONNEES_INPN CHAMP_ID_CA CHAMP_ID_JDD DELETE_XML_FILE_AFTER_IMPORT - -sudo apt-get install virtualenv - -if [ -d 'venv/' ] -then - echo "Suppression du virtual env existant..." - sudo rm -rf venv -fi - -virtualenv -p /usr/bin/python3 venv -source venv/bin/activate -pip install psycopg2 requests - - -python3 import_mtd.py diff --git a/data/scripts/import_ginco/import_taxref/create_structure.sql b/data/scripts/import_ginco/import_taxref/create_structure.sql deleted file mode 100644 index cf7713f528..0000000000 --- a/data/scripts/import_ginco/import_taxref/create_structure.sql +++ /dev/null @@ -1,47 +0,0 @@ -ALTER TABLE taxonomie.import_taxref RENAME TO import_taxref_v11; - -DROP TABLE IF EXISTS taxonomie.import_taxref; - -CREATE TABLE taxonomie.import_taxref -( - regne character varying(20), - phylum character varying(50), - classe character varying(50), - ordre character varying(50), - famille character varying(50), - SOUS_FAMILLE character varying(50), - TRIBU character varying(50), - group1_inpn character varying(50), - group2_inpn character varying(50), - cd_nom integer NOT NULL, - cd_taxsup integer, - cd_sup integer, - cd_ref integer, - rang character varying(10), - lb_nom character varying(100), - lb_auteur character varying(250), - nom_complet character varying(255), - nom_complet_html character varying(255), - nom_valide character varying(255), - nom_vern text, - nom_vern_eng character varying(500), - habitat character varying(10), - fr character varying(10), - gf character varying(10), - mar character varying(10), - gua character varying(10), - sm character varying(10), - sb character varying(10), - spm character varying(10), - may character varying(10), - epa character varying(10), - reu character varying(10), - SA character varying(10), - TA character varying(10), - taaf character varying(10), - pf character varying(10), - nc character varying(10), - wf character varying(10), - cli character varying(10), - url text -); diff --git a/data/scripts/import_ginco/import_taxref/import_new_taxref_version.sh b/data/scripts/import_ginco/import_taxref/import_new_taxref_version.sh deleted file mode 100755 index 345dce8649..0000000000 --- a/data/scripts/import_ginco/import_taxref/import_new_taxref_version.sh +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/env bash -. 
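`import_mtd.sh` exports the settings.ini entries as environment variables so that import_mtd.py can read them through `os.environ`. Note a real mismatch visible in this diff: settings.ini.sample ships `DELETE_XML_FILE_AFTER_IMPORT=true` (lowercase) while the Python script compares the value to the string `"True"`, so the XML cleanup branch never runs. A sketch of a more tolerant parse:

```python
import os

def env_flag(name: str, default: str = "false") -> bool:
    """Read a boolean setting case-insensitively ('true', 'True', '1', 'yes')."""
    return os.environ.get(name, default).strip().lower() in ("1", "true", "yes")

# Exported by import_mtd.sh from settings.ini before the Python script runs.
TABLE_DONNEES_INPN = os.environ["TABLE_DONNEES_INPN"]
CHAMP_ID_JDD = os.environ["CHAMP_ID_JDD"]
DELETE_XML_FILE_AFTER_IMPORT = env_flag("DELETE_XML_FILE_AFTER_IMPORT")
```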
../settings.ini - -mkdir -p /tmp/taxhub -sudo chown -R "$(id -u)" /tmp/taxhub - - -LOG_DIR="../log/" - -mkdir -p $LOG_DIR - -echo "Import des données de taxref v12" - -echo "Import des données de taxref v12" > $LOG_DIR/update_taxref_v12.log -array=( TAXREF_INPN_v12.zip ESPECES_REGLEMENTEES_v11.zip ) -for i in "${array[@]}" -do - if [ ! -f '/tmp/taxhub/'$i ] - then - wget http://geonature.fr/data/inpn/taxonomie/$i -P /tmp/taxhub - else - echo $i exists - fi - unzip -o /tmp/taxhub/$i -d /tmp/taxhub &>> $LOG_DIR/update_taxref_v12.log -done - - -echo "Import taxref v12" -# sudo -n -u postgres -s psql -d $geonature_db_name -c "DROP TABLE taxonomie.import_taxref" -export PGPASSWORD=$geonature_user_pg_pass;psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -f create_structure.sql &>> $LOG_DIR/update_taxref_v12.log -sudo -n -u postgres -s psql -d $geonature_db_name -f import_taxref_data.sql -export PGPASSWORD=$geonature_user_pg_pass;psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -f migrate_taxref_data.sql &>> $LOG_DIR/update_taxref_v12.log diff --git a/data/scripts/import_ginco/import_taxref/import_taxref_data.sql b/data/scripts/import_ginco/import_taxref/import_taxref_data.sql deleted file mode 100644 index c516553dba..0000000000 --- a/data/scripts/import_ginco/import_taxref/import_taxref_data.sql +++ /dev/null @@ -1,5 +0,0 @@ -COPY taxonomie.import_taxref FROM '/tmp/taxhub/TAXREFv12.txt' -WITH CSV HEADER -DELIMITER E'\t' encoding 'UTF-8'; - - diff --git a/data/scripts/import_ginco/import_taxref/migrate_taxref_data.sql b/data/scripts/import_ginco/import_taxref/migrate_taxref_data.sql deleted file mode 100644 index d19a2a029f..0000000000 --- a/data/scripts/import_ginco/import_taxref/migrate_taxref_data.sql +++ /dev/null @@ -1,71 +0,0 @@ - ------------------------------------------------- ------------------------------------------------- ---Alter existing constraints ------------------------------------------------- ------------------------------------------------- - -ALTER TABLE taxonomie.bib_noms DROP CONSTRAINT fk_bib_nom_taxref; -ALTER TABLE taxonomie.taxref_protection_especes DROP CONSTRAINT taxref_protection_especes_cd_nom_fkey; - -ALTER TABLE taxonomie.t_medias DROP CONSTRAINT check_cd_ref_is_ref; -ALTER TABLE taxonomie.bib_noms DROP CONSTRAINT check_is_valid_cd_ref; -ALTER TABLE taxonomie.cor_taxon_attribut DROP CONSTRAINT check_is_cd_ref; - - -UPDATE taxonomie.taxref t - SET id_statut = fr, id_habitat = it.habitat::int, id_rang = it.rang, regne = it.regne, phylum = it.phylum, - classe = it.classe, ordre = it.ordre, famille = it.famille, cd_taxsup = it.cd_taxsup, - cd_sup = it.cd_sup, cd_ref = it.cd_ref, - lb_nom = it.lb_nom, lb_auteur = it.lb_auteur, nom_complet = it.nom_complet, - nom_complet_html = it.nom_complet_html, nom_valide = it.nom_valide, - nom_vern = it.nom_vern, nom_vern_eng = it.nom_vern_eng, group1_inpn = it.group1_inpn, - group2_inpn = it.group2_inpn, sous_famille = it.sous_famille, - tribu = it.tribu, url = it.url -FROM taxonomie.import_taxref it -WHERE it.cd_nom = t.cd_nom; - -INSERT INTO taxonomie.taxref( - cd_nom, id_statut, id_habitat, id_rang, regne, phylum, classe, - ordre, famille, cd_taxsup, cd_sup, cd_ref, lb_nom, lb_auteur, - nom_complet, nom_complet_html, nom_valide, nom_vern, nom_vern_eng, - group1_inpn, group2_inpn, sous_famille, tribu, url) -SELECT it.cd_nom, it.fr, it.habitat::int, it.rang, it.regne, it.phylum, it.classe, - it.ordre, it.famille, it.cd_taxsup, it.cd_sup, it.cd_ref, it.lb_nom, it.lb_auteur, - it.nom_complet, 
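`import_new_taxref_version.sh` above downloads the TAXREF v12 archives only when they are not already cached in `/tmp/taxhub`, then unzips them. The same download-if-missing step in Python, kept equivalent to the `wget`/`unzip -o` pair (archive names and URL taken verbatim from the script):

```python
import os
import zipfile
import requests

ARCHIVES = ("TAXREF_INPN_v12.zip", "ESPECES_REGLEMENTEES_v11.zip")
DEST = "/tmp/taxhub"

os.makedirs(DEST, exist_ok=True)
for name in ARCHIVES:
    path = os.path.join(DEST, name)
    if not os.path.exists(path):
        resp = requests.get(f"http://geonature.fr/data/inpn/taxonomie/{name}")
        resp.raise_for_status()
        with open(path, "wb") as archive:
            archive.write(resp.content)
    else:
        print(f"{name} exists")
    with zipfile.ZipFile(path) as archive:
        archive.extractall(DEST)  # same effect as `unzip -o`
```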
it.nom_complet_html, it.nom_valide, it.nom_vern, it.nom_vern_eng, - it.group1_inpn, it.group2_inpn, it.sous_famille, it.tribu, it.url -FROM taxonomie.import_taxref it -LEFT OUTER JOIN taxonomie.taxref t -ON it.cd_nom = t.cd_nom -WHERE t.cd_nom IS NULL; - --- DELETE MISSING CD_NOM -DELETE FROM taxonomie.taxref -WHERE cd_nom IN ( - SELECT t.cd_nom - FROM taxonomie.taxref t - LEFT OUTER JOIN taxonomie.import_taxref it - ON it.cd_nom = t.cd_nom - WHERE it.cd_nom IS NULL -); - - ------------------------------------------------- ------------------------------------------------- --- REBUILD CONSTAINTS ------------------------------------------------- ------------------------------------------------- - -ALTER TABLE taxonomie.bib_noms - ADD CONSTRAINT fk_bib_nom_taxref FOREIGN KEY (cd_nom) - REFERENCES taxonomie.taxref (cd_nom) MATCH SIMPLE - ON UPDATE NO ACTION ON DELETE NO ACTION; - -ALTER TABLE taxonomie.t_medias - ADD CONSTRAINT check_is_cd_ref CHECK (cd_ref = taxonomie.find_cdref(cd_ref)); - -ALTER TABLE taxonomie.bib_noms - ADD CONSTRAINT check_is_cd_ref CHECK (cd_ref = taxonomie.find_cdref(cd_ref)); - -ALTER TABLE taxonomie.cor_taxon_attribut - ADD CONSTRAINT check_is_cd_ref CHECK (cd_ref = taxonomie.find_cdref(cd_ref)); diff --git a/data/scripts/import_ginco/insert_data.sh b/data/scripts/import_ginco/insert_data.sh deleted file mode 100755 index 042e9e0a05..0000000000 --- a/data/scripts/import_ginco/insert_data.sh +++ /dev/null @@ -1,43 +0,0 @@ -. settings.ini - -function write_log() { - echo $1 - echo "" &>> log/insert_data.log - echo "" &>> log/insert_data.log - echo "--------------------" &>> log/insert_data.log - echo $1 &>> log/insert_data.log - echo "--------------------" &>> log/insert_data.log -} -export PGPASSWORD=$geonature_user_pg_pass; -# fonctions utilitaires pour modifier des champs qui ont des dépendances -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -f utils_drop_dependencies.sql &> log/insert_data.log - -write_log "SCHEMA UTILISATEURS" -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -f utilisateurs.sql &>> log/insert_data.log - -write_log "INSERTION GN_META" - -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -f meta.sql &>> log/insert_data.log - - -write_log "UPDATE REF_GEO" -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -v CODE_INSEE_REG=$code_insee_reg -f ref_geo.sql &>> log/insert_data.log - -write_log "INSERT IN SYNTHESE...cela peut être long" -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -f synthese_before_insert.sql &>> log/insert_data.log - -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -v GINCO_TABLE=$ginco_data_table_name -v GINCO_TABLE_QUOTED="'$ginco_data_table_name'" -f synthese.sql &>> log/insert_data.log -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -v GINCO_TABLE=$ginco_data_table_name -v GINCO_TABLE_QUOTED="'$ginco_data_table_name'" -f synthese_without_geom.sql &>> log/insert_data.log - - -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -f synthese_after_insert.sql &>> log/insert_data.log - -write_log "Occtax" -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -f occtax.sql &>> log/insert_data.log - -echo "OK" - -write_log "PERMISSIONS" -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -f permissions.sql &>> log/insert_data.log - -echo "Terminé" \ No newline at end of file diff --git a/data/scripts/import_ginco/log/.gitignore b/data/scripts/import_ginco/log/.gitignore deleted file mode 100644 index 
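`migrate_taxref_data.sql` synchronises `taxonomie.taxref` with the freshly loaded `import_taxref`: UPDATE the cd_nom present on both sides, INSERT the missing ones, DELETE the ones that disappeared, with constraint juggling around the three steps. When driving that file from Python rather than psql, one transaction lets a failed step roll the whole synchronisation back instead of leaving taxref half-migrated. A sketch, to run only after the COPY step has filled `import_taxref` (the server-side COPY itself needs superuser rights, which is why the shell script runs that one file as the postgres user); credentials mirror the settings.ini names and must be adapted.

```python
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="geonature2db",
                        user="geonatadmin", password="monpassachanger")
try:
    with conn, conn.cursor() as cur:  # commit on success, rollback on error
        with open("migrate_taxref_data.sql") as f:
            cur.execute(f.read())     # psycopg2 accepts the multi-statement file
finally:
    conn.close()
```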
5fb03d009d..0000000000 --- a/data/scripts/import_ginco/log/.gitignore +++ /dev/null @@ -1 +0,0 @@ -./* \ No newline at end of file diff --git a/data/scripts/import_ginco/meta.sql b/data/scripts/import_ginco/meta.sql deleted file mode 100644 index 7e5e555a47..0000000000 --- a/data/scripts/import_ginco/meta.sql +++ /dev/null @@ -1,105 +0,0 @@ --- creation d'un cadre d'acquisition provisoire pour pouvoir inserer les JDD. On fera le rattachement plus tard grâce au web service MTD - -TRUNCATE TABLE gn_meta.t_acquisition_frameworks CASCADE; - -INSERT INTO gn_meta.t_acquisition_frameworks ( - acquisition_framework_name, - acquisition_framework_desc, - id_nomenclature_territorial_level, - id_nomenclature_financing_type, - acquisition_framework_start_date, - meta_create_date, - meta_update_date - - ) VALUES ( - 'CA provisoire - import Ginco -> GeoNature', - ' - ', - ref_nomenclatures.get_id_nomenclature('NIVEAU_TERRITORIAL', '4'), - ref_nomenclatures.get_id_nomenclature('TYPE_FINANCEMENT', '1'), - '2019-11-17', - NOW(), - NOW() - ) -; - -WITH jdd_uuid AS ( - SELECT - value_string as uuid, - jdd_id - FROM ginco_migration.jdd_field field - WHERE field.key = 'metadataId' -), -jdd_name AS ( - SELECT - value_string as jdd_name, - jdd_id - FROM ginco_migration.jdd_field field - WHERE field.key = 'title' -) -INSERT INTO gn_meta.t_datasets ( - unique_dataset_id, - id_acquisition_framework, - dataset_name, - dataset_shortname, - dataset_desc, - marine_domain, - terrestrial_domain, - active, - validable, - meta_create_date, - id_nomenclature_data_type, - id_nomenclature_dataset_objectif, - id_nomenclature_collecting_method, - id_nomenclature_data_origin, - id_nomenclature_source_status, - id_nomenclature_resource_type - ) - SELECT - jdd_uuid.uuid::uuid, - (SELECT id_acquisition_framework FROM gn_meta.t_acquisition_frameworks WHERE acquisition_framework_name = 'CA provisoire - import Ginco -> GeoNature' LIMIT 1), - jdd_name.jdd_name, - 'A compléter', - 'A compléter', - false, - true, - true, - true, - '2019-11-17', - ref_nomenclatures.get_id_nomenclature('DATA_TYP', '2'), - ref_nomenclatures.get_id_nomenclature('JDD_OBJECTIFS', '7.2'), - ref_nomenclatures.get_id_nomenclature('METHO_RECUEIL', '12'), - ref_nomenclatures.get_id_nomenclature('DS_PUBLIQUE', 'NSP'), - ref_nomenclatures.get_id_nomenclature('STATUT_SOURCE', 'NSP'), - ref_nomenclatures.get_id_nomenclature('RESOURCE_TYP', '1') - FROM ginco_migration.jdd jdd - JOIN jdd_uuid ON jdd_uuid.jdd_id = jdd.id - JOIN jdd_name ON jdd_name.jdd_id = jdd.id - where status != 'deleted' -; - --- set submission date -update gn_meta.t_acquisition_frameworks as af -set initial_closing_date = subquery.date_max -from ( -with jdd_uuid as ( -select j.id, jf.value_string as _uuid -from ginco_migration.jdd j -join ginco_migration.jdd_field jf on jf.jdd_id = j.id -where jf.key = 'metadataId' -) -select max(TO_TIMESTAMP(value_string, 'YYYY-MM-DD_HH24-MI-SS')) as date_max, taf.id_acquisition_framework -from ginco_migration.jdd j -join ginco_migration.jdd_field jf on jf.jdd_id = j.id -join jdd_uuid u on u.id = j.id -join gn_meta.t_datasets td on u._uuid::uuid = td.unique_dataset_id -join gn_meta.t_acquisition_frameworks taf on taf.id_acquisition_framework = td.id_acquisition_framework -where jf."key" = 'publishedAt' -group by taf.id_acquisition_framework -) as subquery -where af.id_acquisition_framework = subquery.id_acquisition_framework and af.initial_closing_date is NULL - - - - - -SELECT pg_catalog.setval('gn_meta.t_datasets_id_dataset_seq', (SELECT max(id_dataset)+1 FROM 
gn_meta.t_datasets), true); diff --git a/data/scripts/import_ginco/occtax.sql b/data/scripts/import_ginco/occtax.sql deleted file mode 100644 index 65293cca8d..0000000000 --- a/data/scripts/import_ginco/occtax.sql +++ /dev/null @@ -1,59 +0,0 @@ - DELETE FROM taxonomie.cor_nom_liste; - DELETE FROM taxonomie.bib_noms; - ALTER TABLE taxonomie.cor_nom_liste DISABLE TRIGGER trg_refresh_mv_taxref_list_forautocomplete; - - INSERT INTO taxonomie.bib_noms(cd_nom,cd_ref,nom_francais) - SELECT cd_nom, cd_ref, nom_vern - FROM taxonomie.taxref - WHERE id_rang NOT IN ('Dumm','SPRG','KD','SSRG','IFRG','PH','SBPH','IFPH','DV','SBDV','SPCL','CLAD','CL', - 'SBCL','IFCL','LEG','SPOR','COH','OR','SBOR','IFOR','SPFM','FM','SBFM','TR','SSTR'); - - INSERT INTO taxonomie.cor_nom_liste (id_liste,id_nom) - SELECT 100,n.id_nom FROM taxonomie.bib_noms n; - --- INSERT INTO taxonomie.vm_taxref_list_forautocomplete --- SELECT t.cd_nom, --- t.cd_ref, --- t.search_name, --- t.nom_valide, --- t.lb_nom, --- t.regne, --- t.group2_inpn, --- l.id_liste --- FROM ( --- SELECT t_1.cd_nom, --- t_1.cd_ref, --- concat(t_1.lb_nom, ' = ', t_1.nom_valide, '', ' - [', t_1.id_rang, ' - ', t_1.cd_nom , ']') AS search_name, --- t_1.nom_valide, --- t_1.lb_nom, --- t_1.regne, --- t_1.group2_inpn --- FROM taxonomie.taxref t_1 --- UNION --- SELECT t_1.cd_nom, --- t_1.cd_ref, --- concat(n.nom_francais, ' = ', t_1.nom_valide, '', ' - [', t_1.id_rang, ' - ', t_1.cd_nom , ']' ) AS search_name, --- t_1.nom_valide, --- t_1.lb_nom, --- t_1.regne, --- t_1.group2_inpn --- FROM taxonomie.taxref t_1 --- JOIN taxonomie.bib_noms n --- ON t_1.cd_nom = n.cd_nom --- WHERE n.nom_francais IS NOT NULL AND t_1.cd_nom = t_1.cd_ref --- ) t --- JOIN taxonomie.v_taxref_all_listes l ON t.cd_nom = l.cd_nom; --- COMMENT ON TABLE vm_taxref_list_forautocomplete --- IS 'Table construite à partir d''une requete sur la base et mise à jour via le trigger trg_refresh_mv_taxref_list_forautocomplete de la table cor_nom_liste'; - - - ALTER TABLE taxonomie.cor_nom_liste ENABLE TRIGGER trg_refresh_mv_taxref_list_forautocomplete; - -CREATE INDEX i_vm_taxref_list_forautocomplete_cd_nom - ON taxonomie.vm_taxref_list_forautocomplete (cd_nom ASC NULLS LAST); -CREATE INDEX i_vm_taxref_list_forautocomplete_search_name - ON taxonomie.vm_taxref_list_forautocomplete (search_name ASC NULLS LAST); -CREATE INDEX i_tri_vm_taxref_list_forautocomplete_search_name - ON taxonomie.vm_taxref_list_forautocomplete - USING gist - (search_name gist_trgm_ops); diff --git a/data/scripts/import_ginco/permissions.sql b/data/scripts/import_ginco/permissions.sql deleted file mode 100644 index f018f4ddc3..0000000000 --- a/data/scripts/import_ginco/permissions.sql +++ /dev/null @@ -1,66 +0,0 @@ -INSERT INTO gn_permissions.cor_role_action_filter_module_object - ( - id_role, - id_action, - id_filter, - id_module, - id_object - ) -VALUES - -- Groupe Admin sur tout geonature - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 1, 4, 0, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 2, 4, 0, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 3, 4, 0, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 4, 4, 0, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 5, 4, 0, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe 
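`meta.sql` above pivots GINCO's key/value table `jdd_field` (one row per `jdd_id` and `key`) into one record per dataset before feeding `gn_meta.t_datasets`. The same pivot can be used to preview the source data once the FDW schema exists; a sketch, assuming `cursor` is an open psycopg2 cursor on the GeoNature database:

```python
# One row per non-deleted GINCO dataset: its SINP UUID and its title.
cursor.execute(
    """
    SELECT u.value_string AS uuid, t.value_string AS title
    FROM ginco_migration.jdd_field u
    JOIN ginco_migration.jdd_field t
      ON t.jdd_id = u.jdd_id AND t.key = 'title'
    JOIN ginco_migration.jdd j
      ON j.id = u.jdd_id AND j.status != 'deleted'
    WHERE u.key = 'metadataId'
    """
)
for ds_uuid, title in cursor.fetchall():
    print(ds_uuid, title)
```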
IS TRUE), 6, 4, 0, 1), - --CRUVED du groupe 'producteur' sur tout GeoNature - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 1, 4, 0, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 2, 3, 0, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 3, 2, 0, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 4, 1, 0, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 5, 3, 0, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 6, 2, 0, 1), - -- Groupe admin a tous les droit dans METADATA - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 1, 4, 2, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 2, 4, 2, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 3, 4, 2, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 4, 4, 2, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 5, 4, 2, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 6, 4, 2, 1), - -- Groupe producteur acces limité a dans METADATA - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 1, 1, 2, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 2, 3, 2, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 3, 1, 2, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 4, 1, 2, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 5, 3, 2, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 6, 1, 2, 1), - -- Groupe en producteur, n'a pas accès à l'admin - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 1, 1, 1, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 2, 1, 1, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 3, 1, 1, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 4, 1, 1, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 5, 1, 1, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur'), 6, 1, 1, 1), - -- Groupe en admin a tous les droits sur l'admin - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 1, 4, 1, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 2, 4, 1, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 3, 4, 1, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 4, 4, 1, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 5, 4, 1, 1), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 6, 4, 1, 1), - -- Groupe ADMIN peut gérer les permissions depuis le backoffice - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 1, 4, 1, 2), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 2, 4, 1, 2), - 
((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 3, 4, 1, 2), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 4, 4, 1, 2), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 5, 4, 1, 2), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 6, 4, 1, 2), - -- Groupe ADMIN peut gérer les nomenclatures depuis le backoffice - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 1, 4, 1, 3), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 2, 4, 1, 3), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 3, 4, 1, 3), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 4, 4, 1, 3), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 5, 4, 1, 3), - ((SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS TRUE), 6, 4, 1, 3) -; diff --git a/data/scripts/import_ginco/ref_geo.sql b/data/scripts/import_ginco/ref_geo.sql deleted file mode 100644 index 19ae3ee891..0000000000 --- a/data/scripts/import_ginco/ref_geo.sql +++ /dev/null @@ -1,6 +0,0 @@ -UPDATE ref_geo.l_areas -SET enable = false -WHERE NOT geom && (select st_extent(l.geom) -from ref_geo.l_areas l -join ref_geo.li_municipalities li ON l.id_area = li.id_area -where insee_reg = :CODE_INSEE_REG::character varying); \ No newline at end of file diff --git a/data/scripts/import_ginco/restore_db.sh b/data/scripts/import_ginco/restore_db.sh deleted file mode 100755 index c17b7ee3a0..0000000000 --- a/data/scripts/import_ginco/restore_db.sh +++ /dev/null @@ -1,86 +0,0 @@ -#!/usr/bin/env bash - -# Scripts qui restaure une BDD GINCO à parit d'un DUMP SQL -# Puis crée un Foreign Data Wrapper entre la base restaurée et la base GeoNature cible -# remplir le fichier settings.ini en amont - -. settings.ini -if [ ! -d 'log' ] -then - mkdir log -fi - -function write_log() { - echo $1 - echo "" &>> log/restore_ginco_db.log - echo "" &>> log/restore_ginco_db.log - echo "--------------------" &>> log/restore_ginco_db.log - echo $1 &>> log/restore_ginco_db.log - echo "--------------------" &>> log/restore_ginco_db.log -} - -function database_exists () { - # /!\ Will return false if psql can't list database. Edit your pg_hba.conf - # as appropriate. - if [ -z $1 ] - then - # Argument is null - return 0 - else - # Grep db name in the list of database - sudo -u postgres -s -- psql -tAl | grep -q "^$1|" - return $? - fi -} - -# create user -sudo ls -sudo -n -u postgres -s psql -c "CREATE ROLE admin WITH LOGIN PASSWORD '$ginco_admin_pg_pass';" &> log/restore_ginco_db.log -sudo -n -u postgres -s psql -c "CREATE ROLE ogam WITH LOGIN PASSWORD '$ginco_ogame_pg_pass';" &>> log/restore_ginco_db.log -sudo -n -u postgres -s psql -c "ALTER ROLE admin WITH SUPERUSER;" &>> log/restore_ginco_db.log -#create database - -if database_exists $ginco_db_name -then - if $drop_ginco_db - then - write_log "Drop database..." - sudo -u postgres -s dropdb $ginco_db_name - else - write_log "Database exists but the settings file indicate that we don't have to drop it." 
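`permissions.sql` spells out one row per CRUVED action (1 = C … 6 = D). Judging from the values above, `id_filter` encodes the scope shifted by one (filter 4 = scope 3, all data; filter 1 = scope 0); that reading is an inference from this file, not a documented contract. Under that assumption, the "Producteur" rows on the whole of GeoNature (module 0, object 1), i.e. C:3 R:2 U:1 V:0 E:2 D:1, can be generated rather than written out by hand:

```python
producteur_cruved = [3, 2, 1, 0, 2, 1]  # C, R, U, V, E, D scopes
rows = [(action, scope + 1, 0, 1)       # id_filter assumed to be scope + 1
        for action, scope in enumerate(producteur_cruved, start=1)]
cursor.executemany(
    """
    INSERT INTO gn_permissions.cor_role_action_filter_module_object
        (id_role, id_action, id_filter, id_module, id_object)
    VALUES ((SELECT id_role FROM utilisateurs.t_roles
             WHERE nom_role = 'Producteur'), %s, %s, %s, %s)
    """,
    rows,
)
```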
- fi -fi -write_log "Create DB" -sudo -n -u postgres -s createdb -O admin $ginco_db_name &>> log/restore_ginco_db.log -sudo -n -u postgres -s psql -d $ginco_db_name -c "CREATE EXTENSION IF NOT EXISTS postgis;" &>> log/restore_ginco_db.log -sudo -n -u postgres -s psql -d $ginco_db_name -c "CREATE EXTENSION IF NOT EXISTS plpgsql WITH SCHEMA pg_catalog; COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language';" &>> log/restore_ginco_db.log -sudo -n -u postgres -s psql -d $ginco_db_name -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";' &>> log/restore_ginco_db.log -sudo -n -u postgres -s psql -d $ginco_db_name -c 'CREATE EXTENSION IF NOT EXISTS unaccent;' &>> log/restore_ginco_db.log - -write_log "Restauration de la DB... ça va être long !" -export PGPASSWORD=$ginco_admin_pg_pass;psql -h $db_host -d $ginco_db_name -U admin -f $sql_dump_path &>> log/restore_ginco_db.log -write_log "Restauration terminée" - - - -write_log "Création du lien entre base: FDW" - -sudo -n -u postgres -s psql -d $geonature_db_name -c "CREATE EXTENSION IF NOT EXISTS postgres_fdw;"&>> log/restore_ginco_db.log -sudo -n -u postgres -s psql -d $geonature_db_name -c "DROP SERVER IF EXISTS gincoserver CASCADE;"&>> log/restore_ginco_db.log -sudo -n -u postgres -s psql -d $geonature_db_name -c "CREATE SERVER gincoserver FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '$db_host', dbname '$ginco_db_name', port '$db_port');"&>> log/restore_ginco_db.log - -sudo -n -u postgres -s psql -d $geonature_db_name -c "CREATE USER MAPPING FOR geonatadmin SERVER gincoserver OPTIONS (user 'admin', password '$ginco_admin_pg_pass');"&>> log/restore_ginco_db.log - -sudo -n -u postgres -s psql -d $geonature_db_name -c "ALTER SERVER gincoserver OWNER TO $geonature_pg_user;"&>> log/restore_ginco_db.log -export PGPASSWORD=$geonature_user_pg_pass; -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -c " DROP SCHEMA IF EXISTS ginco_migration;" -psql -h $db_host -U $geonature_pg_user -d $geonature_db_name -c " -CREATE SCHEMA ginco_migration; -IMPORT FOREIGN SCHEMA website FROM SERVER gincoserver INTO ginco_migration; -IMPORT FOREIGN SCHEMA raw_data FROM SERVER gincoserver INTO ginco_migration; -" - -write_log "Restauration terminée" - - - diff --git a/data/scripts/import_ginco/settings.ini.sample b/data/scripts/import_ginco/settings.ini.sample deleted file mode 100644 index 2670a13e04..0000000000 --- a/data/scripts/import_ginco/settings.ini.sample +++ /dev/null @@ -1,29 +0,0 @@ - -# Supprimer la base Ginco (cas ou le script est lancé à plusieurs reprises) -drop_ginco_db=false - -# noom de la base de données Ginco créé à partir du dump sql -ginco_db_name=ginco_db -# Les utilisateur 'admin' et 'ogam' sont necessair à la restauration et sont créé par le script. 
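`restore_db.sh` ends by importing the GINCO `website` and `raw_data` schemas into `ginco_migration` through postgres_fdw. A quick sanity check that the foreign tables actually landed, using the standard `information_schema.foreign_tables` view (`cursor`: psycopg2 cursor on the GeoNature database):

```python
cursor.execute(
    """
    SELECT foreign_table_name
    FROM information_schema.foreign_tables
    WHERE foreign_table_schema = 'ginco_migration'
    ORDER BY foreign_table_name
    """
)
tables = [name for (name,) in cursor.fetchall()]
assert tables, "IMPORT FOREIGN SCHEMA produced no tables: check the gincoserver FDW"
print(f"{len(tables)} foreign tables imported")
```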
-# Veuillez choisir des mdp pour ces deux utilisateurs -ginco_admin_pg_pass=monpassachanger -ginco_ogame_pg_pass=monpassachanger -# nom du modèle ogam de la table des données à importer -ginco_data_table_name=model_592e825dab701_observation -# code insee de la région des données Ginco (pour ref_geo GeoNature) -code_insee_reg=34 - -sql_dump_path=./path_to_sql_file.sql - -# Information de la base de donnée GeoNature cible -geonature_db_name=geonature2db -geonature_pg_user=geonatadmin -geonature_user_pg_pass=monpassachanger -db_host=localhost -db_port=5432 - - -# Pour le script python de récupération des JDD et CA, ne pas modifier sauf si vous savez ce que vous faites -TABLE_DONNEES_INPN=gn_meta.t_datasets -CHAMP_ID_JDD=unique_dataset_id -DELETE_XML_FILE_AFTER_IMPORT=true \ No newline at end of file diff --git a/data/scripts/import_ginco/synthese.sql b/data/scripts/import_ginco/synthese.sql deleted file mode 100644 index 335adb76cc..0000000000 --- a/data/scripts/import_ginco/synthese.sql +++ /dev/null @@ -1,219 +0,0 @@ -------------------- ---SCHEMA SYNTHESE-- -------------------- --- Creation d'une vue materialisée avec seulement les données avec geom --- et qui n'appartiennent pas à un JDD suppriméé -CREATE OR REPLACE FUNCTION convert_to_integer(v_input text) -RETURNS INTEGER AS $$ -DECLARE v_int_value INTEGER DEFAULT NULL; -BEGIN - BEGIN - v_int_value := v_input::INTEGER; - EXCEPTION WHEN OTHERS THEN - RAISE NOTICE 'Invalid integer value: "%". Returning NULL.', v_input; - RETURN NULL; - END; -RETURN v_int_value; -END; -$$ LANGUAGE plpgsql; - - -DROP MATERIALIZED VIEW IF exists ginco_migration.vm_data_model_source CASCADE; -DROP SEQUENCE IF EXISTS vm_data_model_source; -CREATE SEQUENCE vm_data_model_source CYCLE; -CREATE MATERIALIZED VIEW ginco_migration.vm_data_model_source AS - SELECT nextval('vm_data_model_source'::regclass) AS id, - m.anneerefcommune, - m.typeinfogeomaille, - convert_to_integer(m.cdnom) AS cdnom, - m.jourdatedebut, - m.statutobservation, - m.occnaturalite, - m.typeinfogeocommune, - m.cdref, - m.heuredatefin, - m.dspublique, - m.codemaille, - m.validateurnomorganisme, - m.identifiantpermanent::uuid AS identifiantpermanent, - m.versionrefmaille, - m.observateurnomorganisme, - m.versiontaxref, - m.referencebiblio, - m.typeinfogeodepartement, - m.diffusionniveauprecision, - m.codecommune, - m.denombrementmax::integer, - m.codedepartement, - m.anneerefdepartement, - m.observateuridentite, - m.deefloutage, - m.natureobjetgeo, - m.codeme, - m.orgtransformation, - m.versionme, - m.occetatbiologique, - m.occstatutbiologique, - m.identifiantorigine, - m.dateme, - m.heuredatedebut, - m.denombrementmin::integer, - m.versionen, - m.nomrefmaille, - m.statutsource, - m.occsexe, - m.nomcommune, - m.typeinfogeome, - m.codeen, - m.organismegestionnairedonnee, - m.objetdenombrement, - m.commentaire, - m.obsmethode, - m.typeen, - m.nomcite, - m.typeinfogeoen, - m.jourdatefin, - m.occstadedevie, - m.jddmetadonneedeeid, - m.sensimanuelle, - m.codecommunecalcule, - m.submission_id, - m.sensiversionreferentiel, - m.sensiniveau, - m.codedepartementcalcule, - m.deedatedernieremodification, - m.sensialerte, - m.sensidateattribution, - m.nomcommunecalcule, - m.nomvalide, - m.sensireferentiel, - m.sensible, - m.codemaillecalcule, - m.provider_id, - m.geometrie, - m.cdnomcalcule, - m.cdrefcalcule, - m.taxostatut, - m.taxomodif, - m.taxoalerte, - m.user_login - FROM ginco_migration. 
m - -- on ne prend que les JDD non supprimé car la table gn_meta.t_datasets ne comprend que les JDD non supprimé - join gn_meta.t_datasets d on d.unique_dataset_id = m.jddmetadonneedeeid::uuid - where m.geometrie is not null - ; - - --- Insertion des données -DELETE FROM gn_synthese.synthese; -DELETE FROM gn_synthese.t_sources -WHERE name_source = 'Ginco'; - --- creation d'une source -INSERT INTO gn_synthese.t_sources -( - name_source, - desc_source, - entity_source_pk_field - ) -VALUES( - 'Ginco', - 'Données source Ginco', - concat('ginco_migration.', :GINCO_TABLE_QUOTED) -); - - -UPDATE gn_synthese.defaults_nomenclatures_value -SET id_nomenclature = ref_nomenclatures.get_id_nomenclature('STATUT_VALID', '6') -WHERE mnemonique_type = 'STATUT_VALID'; - - --- suppresion des contraintes, on tentera de les remettre plus tard... -ALTER TABLE gn_synthese.synthese DROP CONSTRAINT IF EXISTS check_synthese_date_max; -ALTER TABLE gn_synthese.synthese DROP CONSTRAINT IF EXISTS check_synthese_count_max; - -INSERT INTO gn_synthese.synthese ( -unique_id_sinp, -id_source, -entity_source_pk_value, -id_dataset, -id_nomenclature_geo_object_nature, -id_nomenclature_obs_technique, -id_nomenclature_bio_status, -id_nomenclature_bio_condition, -id_nomenclature_naturalness, -id_nomenclature_diffusion_level, -id_nomenclature_life_stage, -id_nomenclature_sex, -id_nomenclature_obj_count, -id_nomenclature_observation_status, -id_nomenclature_blurring, -id_nomenclature_source_status, -id_nomenclature_info_geo_type, -id_nomenclature_sensitivity, -count_min, -count_max, -cd_nom, -nom_cite, -meta_v_taxref, -the_geom_4326, -the_geom_point, -the_geom_local, -date_min, -date_max, -observers, -id_digitiser, -comment_context, -last_action -) -SELECT - m.identifiantpermanent::uuid, - (SELECT id_source FROM gn_synthese.t_sources WHERE name_source = 'Ginco'), - m.identifiantpermanent, - (SELECT id_dataset FROM gn_meta.t_datasets ds where ds.unique_dataset_id = COALESCE(m.jddmetadonneedeeid::uuid, NULL)), - t1.id_nomenclature, - t2.id_nomenclature, - t3.id_nomenclature, - t4.id_nomenclature, - t5.id_nomenclature, - t6.id_nomenclature, - t7.id_nomenclature, - t8.id_nomenclature, - t9.id_nomenclature, - t10.id_nomenclature, - t11.id_nomenclature, - t12.id_nomenclature, - t13.id_nomenclature, - t14.id_nomenclature, - m.denombrementmin, - m.denombrementmax, - tax.cd_nom, - m.nomcite, - substring(m.versiontaxref from 1 for 50), - m.geometrie, - public.st_centroid(m.geometrie), - public.st_transform(m.geometrie, 2154), - concat((to_char(m.jourdatedebut, 'DD/MM/YYYY'), ' ', COALESCE(to_char(m.heuredatedebut, 'HH24:MI:SS'),'00:00:00')))::timestamp, - concat((to_char(m.jourdatefin, 'DD/MM/YYYY'), ' ', COALESCE(to_char(m.heuredatedebut, 'HH24:MI:SS'),'00:00:00')))::timestamp, - m.observateuridentite, - (select id_role from utilisateurs.t_roles tr where tr.nom_role = m.user_login LIMIT 1), - m.commentaire, - 'I' -FROM ginco_migration.vm_data_model_source as m -left JOIN ref_nomenclatures.t_nomenclatures t1 ON t1.cd_nomenclature = m.natureobjetgeo AND t1.id_type = ref_nomenclatures.get_id_nomenclature_type('NAT_OBJ_GEO') -left JOIN ref_nomenclatures.t_nomenclatures t2 ON t2.cd_nomenclature = m.obsmethode AND t2.id_type = ref_nomenclatures.get_id_nomenclature_type('METH_OBS') -left JOIN ref_nomenclatures.t_nomenclatures t3 ON t3.cd_nomenclature = m.occstatutbiologique AND t3.id_type = ref_nomenclatures.get_id_nomenclature_type('STATUT_BIO') -left JOIN ref_nomenclatures.t_nomenclatures t4 ON t4.cd_nomenclature = m.occetatbiologique AND 
t4.id_type = ref_nomenclatures.get_id_nomenclature_type('ETA_BIO') -left JOIN ref_nomenclatures.t_nomenclatures t5 ON t5.cd_nomenclature = m.occnaturalite AND t5.id_type = ref_nomenclatures.get_id_nomenclature_type('NATURALITE') -left JOIN ref_nomenclatures.t_nomenclatures t6 ON t6.cd_nomenclature = m.diffusionniveauprecision AND t6.id_type = ref_nomenclatures.get_id_nomenclature_type('NIV_PRECIS') -left JOIN ref_nomenclatures.t_nomenclatures t7 ON t7.cd_nomenclature = m.occstadedevie AND t7.id_type = ref_nomenclatures.get_id_nomenclature_type('STADE_VIE') -left JOIN ref_nomenclatures.t_nomenclatures t8 ON t8.cd_nomenclature = m.occsexe AND t8.id_type = ref_nomenclatures.get_id_nomenclature_type('SEXE') -left JOIN ref_nomenclatures.t_nomenclatures t9 ON t9.cd_nomenclature = m.objetdenombrement AND t9.id_type = ref_nomenclatures.get_id_nomenclature_type('OBJ_DENBR') -left JOIN ref_nomenclatures.t_nomenclatures t10 ON t10.cd_nomenclature = m.statutobservation AND t10.id_type = ref_nomenclatures.get_id_nomenclature_type('STATUT_OBS') -left JOIN ref_nomenclatures.t_nomenclatures t11 ON t11.cd_nomenclature = m.deefloutage AND t11.id_type = ref_nomenclatures.get_id_nomenclature_type('DEE_FLOU') -left JOIN ref_nomenclatures.t_nomenclatures t12 ON t12.cd_nomenclature = m.statutsource AND t12.id_type = ref_nomenclatures.get_id_nomenclature_type('STATUT_SOURCE') -left JOIN ref_nomenclatures.t_nomenclatures t13 ON t13.cd_nomenclature = m.typeinfogeoen AND t13.id_type = ref_nomenclatures.get_id_nomenclature_type('TYP_INF_GEO') -left JOIN ref_nomenclatures.t_nomenclatures t14 ON t14.cd_nomenclature = m.sensiniveau AND t14.id_type = ref_nomenclatures.get_id_nomenclature_type('SENSIBLE') - -JOIN taxonomie.taxref tax ON tax.cd_nom = m.cdnom::integer -; diff --git a/data/scripts/import_ginco/synthese_after_insert.sql b/data/scripts/import_ginco/synthese_after_insert.sql deleted file mode 100644 index 2262ad461a..0000000000 --- a/data/scripts/import_ginco/synthese_after_insert.sql +++ /dev/null @@ -1,47 +0,0 @@ - ---tri_meta_dates_change_synthese --- On garde les dates existantes dans les schémas importés --- Mais on update les enregistrements où date_insert et date_update seraient restées vides -UPDATE gn_synthese.synthese SET meta_create_date = NOW() WHERE meta_create_date IS NULL; -UPDATE gn_synthese.synthese SET meta_update_date = NOW() WHERE meta_update_date IS NULL; - - ---maintenance sur la table synthese avant intersects lourd -VACUUM FULL gn_synthese.synthese; -VACUUM ANALYSE gn_synthese.synthese; -REINDEX TABLE gn_synthese.synthese; - --- Actions du trigger tri_insert_cor_area_synthese --- On recalcule l'intersection entre les données de la synthèse et les géométries de ref_geo.l_areas ---TRUNCATE TABLE gn_synthese.cor_area_synthese; -INSERT INTO gn_synthese.cor_area_synthese -SELECT - s.id_synthese, - a.id_area -FROM ref_geo.l_areas a -JOIN gn_synthese.synthese s ON public.st_intersects(s.the_geom_local, a.geom) -WHERE a.enable = true; - --- Maintenance -VACUUM FULL gn_synthese.cor_area_synthese; -VACUUM FULL gn_synthese.cor_observer_synthese; -VACUUM FULL gn_synthese.synthese; -VACUUM FULL gn_synthese.t_sources; - -VACUUM ANALYSE gn_synthese.cor_area_synthese; -VACUUM ANALYSE gn_synthese.cor_observer_synthese; -VACUUM ANALYSE gn_synthese.synthese; -VACUUM ANALYSE gn_synthese.t_sources; - -REINDEX TABLE gn_synthese.cor_area_synthese; -REINDEX TABLE gn_synthese.cor_observer_synthese; -REINDEX TABLE gn_synthese.synthese; -REINDEX TABLE gn_synthese.t_sources; - - --- On réactive les triggers du schéma 
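`synthese_after_insert.sql` runs its maintenance through psql, where each statement is auto-committed. If you replay the same step from Python, keep in mind that `VACUUM` cannot run inside a transaction block and psycopg2 opens one implicitly; switching the connection to autocommit first avoids the error. A sketch, with credentials mirroring the settings.ini names:

```python
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="geonature2db",
                        user="geonatadmin", password="monpassachanger")
conn.autocommit = True  # required: VACUUM refuses to run inside a transaction
with conn.cursor() as cur:
    for table in ("gn_synthese.cor_area_synthese", "gn_synthese.synthese"):
        cur.execute(f"VACUUM FULL {table}")
        cur.execute(f"VACUUM ANALYSE {table}")
        cur.execute(f"REINDEX TABLE {table}")
conn.close()
```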
synthese après avoir joué (ci-dessus) leurs actions -ALTER TABLE gn_synthese.cor_observer_synthese ENABLE TRIGGER trg_maj_synthese_observers_txt; -ALTER TABLE gn_synthese.synthese ENABLE TRIGGER tri_meta_dates_change_synthese; -ALTER TABLE gn_synthese.synthese ENABLE TRIGGER tri_insert_cor_area_synthese; -ALTER TABLE gn_synthese.synthese ENABLE TRIGGER tri_insert_calculate_sensitivity; - diff --git a/data/scripts/import_ginco/synthese_before_insert.sql b/data/scripts/import_ginco/synthese_before_insert.sql deleted file mode 100644 index 9f3bbc6d75..0000000000 --- a/data/scripts/import_ginco/synthese_before_insert.sql +++ /dev/null @@ -1,7 +0,0 @@ -ALTER TABLE gn_synthese.cor_area_synthese DISABLE TRIGGER tri_maj_cor_area_taxon; -ALTER TABLE gn_synthese.cor_observer_synthese DISABLE TRIGGER trg_maj_synthese_observers_txt; -ALTER TABLE gn_synthese.synthese DISABLE TRIGGER tri_meta_dates_change_synthese; -ALTER TABLE gn_synthese.synthese DISABLE TRIGGER tri_insert_cor_area_synthese; -ALTER TABLE gn_synthese.synthese DISABLE TRIGGER tri_insert_calculate_sensitivity; - - diff --git a/data/scripts/import_ginco/synthese_without_geom.sql b/data/scripts/import_ginco/synthese_without_geom.sql deleted file mode 100644 index 811e24e7c2..0000000000 --- a/data/scripts/import_ginco/synthese_without_geom.sql +++ /dev/null @@ -1,175 +0,0 @@ -DROP MATERIALIZED VIEW IF exists ginco_migration.vm_data_model_source_ratt CASCADE; -CREATE MATERIALIZED VIEW ginco_migration.vm_data_model_source_ratt AS -SELECT nextval('vm_data_model_source'::regclass) AS id, - m.anneerefcommune, - m.typeinfogeomaille, - convert_to_integer(m.cdnom) AS cdnom, - m.jourdatedebut, - m.statutobservation, - m.occnaturalite, - m.typeinfogeocommune, - m.cdref, - m.heuredatefin, - m.dspublique, - m.codemaille, - m.validateurnomorganisme, - m.identifiantpermanent::uuid AS identifiantpermanent, - m.versionrefmaille, - m.observateurnomorganisme, - m.versiontaxref, - m.referencebiblio, - m.typeinfogeodepartement, - m.diffusionniveauprecision, - m.codecommune, - m.denombrementmax::integer, - m.codedepartement, - m.anneerefdepartement, - m.observateuridentite, - m.deefloutage, - m.natureobjetgeo, - m.codeme, - m.orgtransformation, - m.versionme, - m.occetatbiologique, - m.occstatutbiologique, - m.identifiantorigine, - m.dateme, - m.heuredatedebut, - m.denombrementmin::integer, - m.versionen, - m.nomrefmaille, - m.statutsource, - m.occsexe, - m.nomcommune, - m.typeinfogeome, - m.codeen, - m.organismegestionnairedonnee, - m.objetdenombrement, - m.commentaire, - m.obsmethode, - m.typeen, - m.nomcite, - m.typeinfogeoen, - m.jourdatefin, - m.occstadedevie, - m.jddmetadonneedeeid, - m.sensimanuelle, - m.codecommunecalcule, - m.submission_id, - m.sensiversionreferentiel, - m.sensiniveau, - m.codedepartementcalcule, - m.deedatedernieremodification, - m.sensialerte, - m.sensidateattribution, - m.nomcommunecalcule, - m.nomvalide, - m.sensireferentiel, - m.sensible, - m.codemaillecalcule, - m.provider_id, - m.geometrie, - m.cdnomcalcule, - m.cdrefcalcule, - m.taxostatut, - m.taxomodif, - m.taxoalerte, - m.user_login - FROM ginco_migration.model_1_observation m - -- on ne prend que les JDD non supprimé car la table gn_meta.t_datasets ne comprend que les JDD non supprimé - join gn_meta.t_datasets d on d.unique_dataset_id = m.jddmetadonneedeeid::uuid - where m.geometrie is null - -; - -INSERT INTO gn_synthese.synthese ( -unique_id_sinp, -id_source, -entity_source_pk_value, -id_dataset, -id_nomenclature_geo_object_nature, -id_nomenclature_obs_technique, 
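`synthese_before_insert.sql` and `synthese_after_insert.sql` must stay paired: if the bulk insert between them fails, the synthese triggers are left disabled, a failure mode the shell script silently ignores. A sketch of the pairing as a context manager, assuming an autocommit connection so the re-enable statements can still run after a failed insert:

```python
from contextlib import contextmanager

@contextmanager
def triggers_disabled(cursor, table, triggers):
    """Disable the given triggers, and re-enable them whatever happens in between."""
    for trg in triggers:
        cursor.execute(f"ALTER TABLE {table} DISABLE TRIGGER {trg}")
    try:
        yield
    finally:
        for trg in triggers:
            cursor.execute(f"ALTER TABLE {table} ENABLE TRIGGER {trg}")

# Usage sketch:
# with triggers_disabled(cur, "gn_synthese.synthese",
#                        ["tri_meta_dates_change_synthese",
#                         "tri_insert_cor_area_synthese",
#                         "tri_insert_calculate_sensitivity"]):
#     ...  # bulk INSERT INTO gn_synthese.synthese
```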
-id_nomenclature_bio_status, -id_nomenclature_bio_condition, -id_nomenclature_naturalness, -id_nomenclature_diffusion_level, -id_nomenclature_life_stage, -id_nomenclature_sex, -id_nomenclature_obj_count, -id_nomenclature_observation_status, -id_nomenclature_blurring, -id_nomenclature_source_status, -id_nomenclature_info_geo_type, -count_min, -count_max, -cd_nom, -nom_cite, -meta_v_taxref, -id_area_attachment, -the_geom_4326, -the_geom_point, -the_geom_local, -date_min, -date_max, -observers, -id_digitiser, -comment_context, -last_action -) -SELECT - m.identifiantpermanent::uuid, - (SELECT id_source FROM gn_synthese.t_sources WHERE name_source = 'Ginco'), - m.identifiantpermanent, - (SELECT id_dataset FROM gn_meta.t_datasets ds where ds.unique_dataset_id = COALESCE(m.jddmetadonneedeeid::uuid, NULL) LIMIT 1), - t1.id_nomenclature, - t2.id_nomenclature, - t3.id_nomenclature, - t4.id_nomenclature, - t5.id_nomenclature, - t6.id_nomenclature, - t7.id_nomenclature, - t8.id_nomenclature, - t9.id_nomenclature, - t10.id_nomenclature, - t11.id_nomenclature, - t12.id_nomenclature, - ref_nomenclatures.get_id_nomenclature('TYP_INF_GEO', '2'), - m.denombrementmin, - m.denombrementmax, - tax.cd_nom, - m.nomcite, - substring(m.versiontaxref from 1 for 50), - areas.id_area as id_area_attachment, - public.st_transform(areas.geom, 4326), - public.st_centroid(public.st_transform(areas.geom, 4326)), - areas.geom, - concat((to_char(m.jourdatedebut, 'DD/MM/YYYY'), ' ', COALESCE(to_char(m.heuredatedebut, 'HH24:MI:SS'),'00:00:00')))::timestamp, - concat((to_char(m.jourdatefin, 'DD/MM/YYYY'), ' ', COALESCE(to_char(m.heuredatedebut, 'HH24:MI:SS'),'00:00:00')))::timestamp, - m.observateuridentite, - (select id_role from utilisateurs.t_roles tr where tr.nom_role = m.user_login LIMIT 1), - m.commentaire, - 'I' -FROM ginco_migration.vm_data_model_source_ratt as m -left JOIN ref_nomenclatures.t_nomenclatures t1 ON t1.cd_nomenclature = m.natureobjetgeo AND t1.id_type = ref_nomenclatures.get_id_nomenclature_type('NAT_OBJ_GEO') -left JOIN ref_nomenclatures.t_nomenclatures t2 ON t2.cd_nomenclature = m.obsmethode AND t2.id_type = ref_nomenclatures.get_id_nomenclature_type('METH_OBS') -left JOIN ref_nomenclatures.t_nomenclatures t3 ON t3.cd_nomenclature = m.occstatutbiologique AND t3.id_type = ref_nomenclatures.get_id_nomenclature_type('STATUT_BIO') -left JOIN ref_nomenclatures.t_nomenclatures t4 ON t4.cd_nomenclature = m.occetatbiologique AND t4.id_type = ref_nomenclatures.get_id_nomenclature_type('ETA_BIO') -left JOIN ref_nomenclatures.t_nomenclatures t5 ON t5.cd_nomenclature = m.occnaturalite AND t5.id_type = ref_nomenclatures.get_id_nomenclature_type('NATURALITE') -left JOIN ref_nomenclatures.t_nomenclatures t6 ON t6.cd_nomenclature = m.diffusionniveauprecision AND t6.id_type = ref_nomenclatures.get_id_nomenclature_type('NIV_PRECIS') -left JOIN ref_nomenclatures.t_nomenclatures t7 ON t7.cd_nomenclature = m.occstadedevie AND t7.id_type = ref_nomenclatures.get_id_nomenclature_type('STADE_VIE') -left JOIN ref_nomenclatures.t_nomenclatures t8 ON t8.cd_nomenclature = m.occsexe AND t8.id_type = ref_nomenclatures.get_id_nomenclature_type('SEXE') -left JOIN ref_nomenclatures.t_nomenclatures t9 ON t9.cd_nomenclature = m.objetdenombrement AND t9.id_type = ref_nomenclatures.get_id_nomenclature_type('OBJ_DENBR') -left JOIN ref_nomenclatures.t_nomenclatures t10 ON t10.cd_nomenclature = m.statutobservation AND t10.id_type = ref_nomenclatures.get_id_nomenclature_type('STATUT_OBS') -left JOIN ref_nomenclatures.t_nomenclatures t11 ON 
t11.cd_nomenclature = m.deefloutage AND t11.id_type = ref_nomenclatures.get_id_nomenclature_type('DEE_FLOU') -left JOIN ref_nomenclatures.t_nomenclatures t12 ON t12.cd_nomenclature = m.statutsource AND t12.id_type = ref_nomenclatures.get_id_nomenclature_type('STATUT_SOURCE') -left JOIN ref_nomenclatures.t_nomenclatures t13 ON t13.cd_nomenclature = m.typeinfogeoen AND t13.id_type = ref_nomenclatures.get_id_nomenclature_type('TYP_INF_GEO') -JOIN taxonomie.taxref tax ON tax.cd_nom = m.cdnom::integer -JOIN ref_geo.l_areas areas ON areas.area_code = CASE WHEN (codecommune[1] is not null and codecommune[2] is null) THEN codecommune[1] - WHEN (codemaille[1] is not null and codemaille[2] is null) THEN codemaille[1] - WHEN (codedepartement[1] is not null and codedepartement[2] is null) THEN codedepartement[1] -END -WHERE ((codecommune[1] is not null and codecommune[2] is null) -OR ((codemaille[1] is not null and codemaille[2] is null) and (codecommune[1] is null or codecommune is null)) -OR ((codemaille[1] is null or codemaille is null) and (codecommune[1] is null or codecommune is null) and (codedepartement[1] is not null and codedepartement[2] is null))) -; - diff --git a/data/scripts/import_ginco/utilisateurs.sql b/data/scripts/import_ginco/utilisateurs.sql deleted file mode 100644 index 865e5962c9..0000000000 --- a/data/scripts/import_ginco/utilisateurs.sql +++ /dev/null @@ -1,124 +0,0 @@ -CREATE SCHEMA ginco_migration; - -IMPORT FOREIGN SCHEMA website FROM SERVER gincoserver INTO ginco_migration; - - - --- TODO -ALTER TABLE utilisateurs.bib_organismes ADD COLUMN desc_organsime text; - --- WARNING: champs d'organisme plus long que 100 char ne rentre pas - ---TRUNCATE utilisateurs.t_roles CASCADE; - -INSERT INTO utilisateurs.bib_organismes( - id_organisme, uuid_organisme, nom_organisme, desc_organsime) -SELECT -id::integer, -COALESCE(uuid::uuid, uuid_generate_v4()), -label, -definition -FROM ginco_migration.providers; - -SELECT setval('utilisateurs.bib_organismes_id_organisme_seq', (SELECT max(id_organisme)+1 FROM utilisateurs.bib_organismes), true); - --- INSERT INTO utilisateurs.t_roles ( --- groupe, --- identifiant, --- nom_role, --- pass, --- email, --- id_organisme, --- date_insert --- ) --- SELECT --- false, --- user_login, --- user_name, --- user_password, --- email, --- provider_id::integer, --- created_at --- FROM ginco_migration.users; --- SELECT setval('utilisateurs.t_roles_id_role_seq', (SELECT max(id_role)+1 FROM utilisateurs.t_roles), true); - --- -- insertion des groupes --- INSERT INTO utilisateurs.t_roles (groupe, nom_role, desc_role) --- SELECT true, role_label, role_definition --- FROM ginco_migration.role; - --- -- insertion des utilisateur dans les groupes --- WITH role_group AS --- ( --- SELECT --- r1.id_role as id_role_grp, --- r2.role_code --- FROM utilisateurs.t_roles r1 --- JOIN ginco_migration.role r2 ON r2.role_label = r1.nom_role --- WHERE groupe IS true --- ) --- INSERT INTO utilisateurs.cor_roles --- SELECT g.id_role_grp, t.id_role --- FROM ginco_migration.role_to_user ru --- JOIN utilisateurs.t_roles t ON t.identifiant = ru.user_login --- JOIN role_group g ON g.role_code = ru.role_code; - --- Insertion de données pour pouvoir se connecter sans CAS - --- Insertion d'un organisme factice -INSERT INTO utilisateurs.bib_organismes (nom_organisme, adresse_organisme, cp_organisme, ville_organisme, tel_organisme, fax_organisme, email_organisme, id_organisme) VALUES -('Autre', '', '', '', '', '', '', -1) -; - --- Création de deux groupe (admin et producteurs) - -INSERT 
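The final JOIN/CASE of `synthese_without_geom.sql` attaches each geometry-less record to exactly one `ref_geo.l_areas` entry, preferring commune over grid cell (maille) over departement, and only when the code array at that level holds a single value; a record whose first non-empty level is ambiguous is dropped. The same rule restated in Python, with illustrative codes:

```python
def attachment_code(codecommune, codemaille, codedepartement):
    """Return the ref_geo area code to attach to, or None to skip the record."""
    for codes in (codecommune, codemaille, codedepartement):
        if not codes:
            continue           # nothing at this level: try the next one
        if len(codes) == 1:
            return codes[0]    # exactly one code: attach here
        return None            # several codes: ambiguous, record is skipped
    return None

assert attachment_code([], ["E091N625"], []) == "E091N625"
assert attachment_code(["34172", "34088"], [], ["34"]) is None  # ambiguous commune
```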
INTO utilisateurs.t_roles (groupe, nom_role, desc_role) VALUES -(true,'Administrateur', 'Groupe Administrateur'), -(true, 'Producteur','Groupe producteur') -; - - --- Insertion d'un user admin/admin pour pouvoir continuer à se connecter -INSERT INTO utilisateurs.t_roles (groupe, identifiant, nom_role, prenom_role, desc_role, pass, email, date_insert, date_update, id_organisme, remarques, pass_plus) VALUES -(false, -'admin', 'Administrateur', 'test', NULL, '21232f297a57a5a743894a0e4a801fc3', - NULL, NULL, NULL, -1, 'utilisateur test à modifier', '$2y$13$TMuRXgvIg6/aAez0lXLLFu0lyPk4m8N55NDhvLoUHh/Ar3rFzjFT.') -; - --- ajout dans le groupe ginco 'administrateur' -INSERT INTO utilisateurs.cor_roles -VALUES ( -(SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS true), -(SELECT id_role FROM utilisateurs.t_roles WHERE identifiant = 'admin') -); - --- droit de connection à GeoNature -INSERT INTO utilisateurs.cor_role_app_profil -VALUES ( - (SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS true), - (SELECT id_application FROM utilisateurs.t_applications WHERE code_application = 'GN'), - 1 -); - -INSERT INTO utilisateurs.cor_role_app_profil -VALUES ( - (SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Producteur' AND groupe IS true), - (SELECT id_application FROM utilisateurs.t_applications WHERE code_application = 'GN'), - 1 -); - --- Droit de connexion à Usershub et Taxhub -INSERT INTO utilisateurs.cor_role_app_profil -VALUES ( - (SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS true), - (SELECT id_application FROM utilisateurs.t_applications WHERE code_application = 'TH'), - 6 -); - -INSERT INTO utilisateurs.cor_role_app_profil -VALUES ( - (SELECT id_role FROM utilisateurs.t_roles WHERE nom_role = 'Administrateur' AND groupe IS true), - (SELECT id_application FROM utilisateurs.t_applications WHERE code_application = 'UH'), - 6 -); - diff --git a/data/scripts/import_ginco/utils_drop_dependencies.sql b/data/scripts/import_ginco/utils_drop_dependencies.sql deleted file mode 100644 index 504113f805..0000000000 --- a/data/scripts/import_ginco/utils_drop_dependencies.sql +++ /dev/null @@ -1,127 +0,0 @@ -CREATE TABLE gn_commons.deps_saved_ddl -( - deps_id serial NOT NULL, - deps_view_schema character varying(255), - deps_view_name character varying(255), - deps_ddl_to_run text, - CONSTRAINT deps_saved_ddl_pkey PRIMARY KEY (deps_id) -); - -CREATE OR REPLACE FUNCTION gn_commons.deps_save_and_drop_dependencies( - p_view_schema character varying, - p_view_name character varying) - RETURNS void AS -$BODY$ -declare - v_curr record; -begin -for v_curr in -( - select obj_schema, obj_name, obj_type from - ( - with recursive recursive_deps(obj_schema, obj_name, obj_type, depth) as - ( - select p_view_schema, p_view_name, null::varchar, 0 - union - select dep_schema::varchar, dep_name::varchar, dep_type::varchar, recursive_deps.depth + 1 from - ( - select ref_nsp.nspname ref_schema, ref_cl.relname ref_name, - rwr_cl.relkind dep_type, - rwr_nsp.nspname dep_schema, - rwr_cl.relname dep_name - from pg_depend dep - join pg_class ref_cl on dep.refobjid = ref_cl.oid - join pg_namespace ref_nsp on ref_cl.relnamespace = ref_nsp.oid - join pg_rewrite rwr on dep.objid = rwr.oid - join pg_class rwr_cl on rwr.ev_class = rwr_cl.oid - join pg_namespace rwr_nsp on rwr_cl.relnamespace = rwr_nsp.oid - where dep.deptype = 'n' - and dep.classid = 'pg_rewrite'::regclass - ) deps - join recursive_deps on 
deps.ref_schema = recursive_deps.obj_schema and deps.ref_name = recursive_deps.obj_name - where (deps.ref_schema != deps.dep_schema or deps.ref_name != deps.dep_name) - ) - select obj_schema, obj_name, obj_type, depth - from recursive_deps - where depth > 0 - ) t - group by obj_schema, obj_name, obj_type - order by max(depth) desc -) loop - - insert into gn_commons.deps_saved_ddl(deps_view_schema, deps_view_name, deps_ddl_to_run) - select p_view_schema, p_view_name, 'COMMENT ON ' || - case - when c.relkind = 'v' then 'VIEW' - when c.relkind = 'm' then 'MATERIALIZED VIEW' - else '' - end - || ' ' || n.nspname || '.' || c.relname || ' IS ''' || replace(d.description, '''', '''''') || ''';' - from pg_class c - join pg_namespace n on n.oid = c.relnamespace - join pg_description d on d.objoid = c.oid and d.objsubid = 0 - where n.nspname = v_curr.obj_schema and c.relname = v_curr.obj_name and d.description is not null; - - insert into gn_commons.deps_saved_ddl(deps_view_schema, deps_view_name, deps_ddl_to_run) - select p_view_schema, p_view_name, 'COMMENT ON COLUMN ' || n.nspname || '.' || c.relname || '.' || a.attname || ' IS ''' || replace(d.description, '''', '''''') || ''';' - from pg_class c - join pg_attribute a on c.oid = a.attrelid - join pg_namespace n on n.oid = c.relnamespace - join pg_description d on d.objoid = c.oid and d.objsubid = a.attnum - where n.nspname = v_curr.obj_schema and c.relname = v_curr.obj_name and d.description is not null; - - insert into gn_commons.deps_saved_ddl(deps_view_schema, deps_view_name, deps_ddl_to_run) - select p_view_schema, p_view_name, 'GRANT ' || privilege_type || ' ON ' || table_schema || '.' || table_name || ' TO ' || grantee - from information_schema.role_table_grants - where table_schema = v_curr.obj_schema and table_name = v_curr.obj_name; - - if v_curr.obj_type = 'v' then - insert into gn_commons.deps_saved_ddl(deps_view_schema, deps_view_name, deps_ddl_to_run) - select p_view_schema, p_view_name, 'CREATE VIEW ' || v_curr.obj_schema || '.' || v_curr.obj_name || ' AS ' || view_definition - from information_schema.views - where table_schema = v_curr.obj_schema and table_name = v_curr.obj_name; - elsif v_curr.obj_type = 'm' then - insert into gn_commons.deps_saved_ddl(deps_view_schema, deps_view_name, deps_ddl_to_run) - select p_view_schema, p_view_name, 'CREATE MATERIALIZED VIEW ' || v_curr.obj_schema || '.' || v_curr.obj_name || ' AS ' || definition - from pg_matviews - where schemaname = v_curr.obj_schema and matviewname = v_curr.obj_name; - end if; - - execute 'DROP ' || - case - when v_curr.obj_type = 'v' then 'VIEW' - when v_curr.obj_type = 'm' then 'MATERIALIZED VIEW' - end - || ' ' || v_curr.obj_schema || '.' 
|| v_curr.obj_name;
-
-end loop;
-end;
-$BODY$
-  LANGUAGE plpgsql VOLATILE
-  COST 100;
-
-
-CREATE OR REPLACE FUNCTION gn_commons.deps_restore_dependencies(
-    p_view_schema character varying,
-    p_view_name character varying)
-  RETURNS void AS
-$BODY$
-declare
-  v_curr record;
-begin
-for v_curr in
-(
-  select deps_ddl_to_run
-  from gn_commons.deps_saved_ddl
-  where deps_view_schema = p_view_schema and deps_view_name = p_view_name
-  order by deps_id desc
-) loop
-  execute v_curr.deps_ddl_to_run;
-end loop;
-delete from gn_commons.deps_saved_ddl
-where deps_view_schema = p_view_schema and deps_view_name = p_view_name;
-end;
-$BODY$
-  LANGUAGE plpgsql VOLATILE
-  COST 100;
-
diff --git a/docs/CHANGELOG.md b/docs/CHANGELOG.md
index 951a92896a..09528ce7e8 100644
--- a/docs/CHANGELOG.md
+++ b/docs/CHANGELOG.md
@@ -4,28 +4,58 @@ CHANGELOG
2.15.0 (unreleased)
-------------------

-TH v2 (intégré à GN et son module Admin), Import v3 (multi-destination, import Occhab et intégré au coeur de GN), authentification externe
+- Nouvelle version (2.0.0) de TaxHub et déplacement de TaxHub dans GeoNature
+- Le module d'import est maintenant intégré dans GeoNature
+- La fiche taxon a été revue
+
**🚀 Nouveautés**

-- Intégration de TaxHub à GeoNature (#3150 + voir la note de version de TaxHub 2.0.0 - LIEN)
-- Intégration du module Import dans le coeur de GeoNature et refonte de celui-ci pour qu'il puisse importer dans d'autres modules que Synthèse (https://github.com/PnX-SI/gn_module_import/issues/303)
-- Ajout de la possibilité d'importer des données depuis des fichiers vers le module Occhab
-- Autres évolutions du module Import à mentionner ici... (évolution des controles ? Import GeoJSON ? Graphiques génériques ? Meilleure gestion des formats de date ? Amélioration export PDF ? Import multi-JDD ?)
-- Ajout de tests frontend automatisés sur le module Import
-- Evolution du fonctionnement des permissions sur le module Import pour gérer son nouveau fonctionnement multi-destination (Action C ajoutée au module Synthèse, JDD à associer aux modules de destination...). Renvoyer vers la doc sur le sujet ?
-- Intégration et enrichissement de la documentation du module Import : https://docs.geonature.fr/xxxxxx
-- Amélioration export Occhab
-- Possibilité de se connecter à GeoNature avec d'autres fournisseurs d'identité (#3111, https://github.com/PnX-SI/UsersHub-authentification-module/pull/93)
+- [TaxHub] Intégration de TaxHub ([2.0 Release Note](https://github.com/PnX-SI/TaxHub/releases/tag/2.0.0)) à GeoNature (#3150)
+  - Plus besoin d'un web-service dédié : la gestion de TaxHub est maintenant intégrée à GeoNature
+- [Import] Refonte et intégration du module d'import dans GeoNature (#2833)
+  - Ajout de l'import vers Occhab
+  - Possibilité d'importer les données dans plusieurs modules (ou « destinations »). Suivre la documentation dédiée à ce sujet (mettre lien).
+  - Évolution des permissions : la création d'un import nécessite une permission C sur le module IMPORT et une permission C sur le module de destination (Synthèse et/ou Occhab) (voir documentation et l'esquisse SQL ci-dessous)
+  - Plusieurs améliorations : contrôles des données, génération du rapport, graphiques produits, nouveaux tests frontend, etc.
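Pour fixer les idées sur ce nouveau modèle multi-destination, voici une esquisse SQL purement indicative (les noms de tables et de colonnes du schéma `gn_permissions` sont supposés ; en pratique, l'attribution se fait via le module Admin de GeoNature) :

```sql
-- Esquisse hypothétique : accorder à un groupe la permission "C" (Créer)
-- sur le module IMPORT et sur le module de destination SYNTHESE,
-- les deux étant nécessaires pour créer un import vers la Synthèse.
INSERT INTO gn_permissions.t_permissions (id_role, id_action, id_module, id_object)
SELECT r.id_role, a.id_action, m.id_module, o.id_object
FROM utilisateurs.t_roles r
JOIN gn_permissions.bib_actions a ON a.code_action = 'C'
JOIN gn_commons.t_modules m ON m.module_code IN ('IMPORT', 'SYNTHESE')
JOIN gn_permissions.t_objects o ON o.code_object = 'ALL'
WHERE r.nom_role = 'Producteur' AND r.groupe IS true;
```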
+
+- [Authentification] Possibilité de se connecter à GeoNature avec d'autres fournisseurs d'identité (#3111, https://github.com/PnX-SI/UsersHub-authentification-module/pull/93)
+  - Plusieurs protocoles de connexion intégrés : OAuth, CAS INPN, UsersHub
+  - Possibilité de se connecter via d'autres instances GeoNature
+  - Voir la documentation pour plus de détails (ajouter lien)
+- [Synthèse] Évolution de la fiche taxon (#3191, #3205, #3174, #3175)
+  - Affichage du profil d'un taxon
+  - Affichage de la synthèse géographique d'un taxon
+  - Affichage du statut de protection du taxon
+  - Affichage des informations taxonomiques présentes dans TaxRef
+
+- [Synthèse] Possibilité de partager une URL menant directement à un onglet (détails, taxonomie, discussion, validation, etc.) de la fiche d'une observation (#3169)
+- [Accueil] Ajout d'un bloc `Discussions` sur la page d'accueil (#3138)
+  - Affichage des discussions auxquelles l'utilisateur a participé, ou portant sur des observations qu'il a saisies ou dont il est à l'origine.
+- [Occhab] Remplacement du champ `is_habitat_complex` par le nouveau champ du standard `id_nomenclature_type_habitat` (voir MosaiqueValue dans la version 2 du standard INPN) (#3125)
+- [Occhab] Affichage de l'UUID d'une station dans l'interface (#3247)
+- [Métadonnées] Il est maintenant possible de supprimer un cadre d'acquisition vide (#3224)
+- [Occtax] Ajout du nom de lieu dans le détail d'un relevé (#3145)
+- [RefGeo] De nouvelles mailles INPN sur la France métropolitaine (2 km, 20 km, 50 km) sont disponibles (https://github.com/PnX-SI/RefGeo/releases/tag/1.5.4) :
+```
+geonature db upgrade ref_geo_inpn_grids_2@head   # Insertion des mailles 2x2 km métropole, fournies par l’INPN
+geonature db upgrade ref_geo_inpn_grids_20@head  # Insertion des mailles 20x20 km métropole, fournies par l’INPN
+geonature db upgrade ref_geo_inpn_grids_50@head  # Insertion des mailles 50x50 km métropole, fournies par l’INPN
+```

**🐛 Corrections**

- Correction de l'URL des modules externes dans le menu latéral (#3093)
+- Correction des erreurs d'exécution de la commande `geonature sensitivity info` (#3216)
+- Correction du placement des tooltips pour le composant `ng-select` (#3142)
+- Correction de l'export Occhab avec des champs additionnels vides (#2837)
+- Correction du bug d'édition d'une géométrie sur une carte Leaflet (#3196)
+- Le lancement de `pytest` sans les tests de _benchmark_ ne nécessite plus l'ajout de `--benchmark-skip` (#3183)
+
+
**⚠️ Notes de version**

Si vous mettez à jour GeoNature :

- - L'application TaxHub a été integrée dans le module Admin de GeoNature (voir documentation TH) et accessible depuis le menu latéral :
- Les permissions basées sur les profils 1-6 ont été rapatriées et adaptées dans le modèle de permissions de GeoNature. TaxHub est désormais un "module" GeoNature et dispose des objets de permissions `TAXONS`, `THEMES`, `LISTES` et `ATTRIBUTS` (voir doc GeoNature pour la description des objets). Les personnes ayant anciennement des droits 6 dans TaxHub ont toutes les permissions sur les objets pré-cités.
Les personnes ayant des droits inférieurs à 6 et ayant un compte sur TaxHub ont maintenant des permissions sur l'objet `TAXON` (voir et éditer des taxons = ajouter des médias et des attributs)
@@ -33,24 +63,44 @@ Si vous mettez à jour GeoNature :
- Le paramètre `API_TAXHUB` est désormais obsolète (déduit de `API_ENDPOINT`) et peut être retiré du fichier de configuration de GeoNature
- Si vous utilisez Occtax-mobile, veillez à modifier le paramètre `taxhub_url` du fichier `/geonature/backend/media/mobile/occtax/settings.json`, pour mettre la valeur `/api/taxhub`
- Une redirection Apache automatique de l'URL de TaxHub et des médias est disponible à l'adresse suivante : XXXX
- - ATLAS a tester -> modification URL des médias
- - suppression de la branche alembic taxhub : `geonature db downgrade taxhub@base`
- - désinstaller TH de votre serveur ?
+ - Les médias ont été déplacés du dossier `/static/medias/` vers `/media/taxhub/`.
+   Les URL des images vont donc changer. Pour des questions de rétrocompatibilité avec d'autres outils (GeoNature-atlas ou GeoNature-citizen par exemple), vous pouvez définir des règles de redirection pour les médias dans le fichier de configuration Apache de TaxHub :
+   ```
+   # Cas où TaxHub et GeoNature sont sur le même sous-domaine
+   RewriteEngine on
+   RewriteRule "^/taxhub/static/medias/(.+)" "/geonature/api/medias/taxhub/$1" [R,L]
+   # Cas où TaxHub et GeoNature ont chacun un sous-domaine
+   RewriteEngine on
+   RewriteRule "^/static/medias/(.+)" "https://geonature./api/medias/taxhub/$1" [R,L]
+   ```
+ - L'application TaxHub n'est plus nécessaire. Si vous voulez utiliser TaxHub uniquement au travers de GeoNature, effectuez les actions suivantes :
+   - Suppression de la branche alembic taxhub : `geonature db downgrade taxhub-standalone@base`
+
+ - Les commandes de TaxHub sont maintenant intégrées à celles de GeoNature.
+   ```shell
+   geonature taxref info                    # avant : flask taxref info
+   geonature taxref enable-bdc-statut-text  # avant : flask taxref enable-bdc-statut-text
+   geonature taxref migrate-to-v17          # avant : flask taxref migrate-to-v17
+   ```
+ - L'intégration de TaxHub dans GeoNature entraîne la suppression du service systemd et de la configuration Apache spécifiques à TaxHub. Les logs de TaxHub sont également centralisés dans le fichier de log de GeoNature
+ - **⚠️ Important ⚠️** : ajouter l'extension `ltree` à votre base de données : `sudo -n -u postgres -s psql -d $db_name -c "CREATE EXTENSION IF NOT EXISTS ltree;"`
- Le module Import a été intégré dans le coeur de GeoNature
- si vous aviez installé le module externe Import, XXXXX
- si vous n'aviez pas installé le module externe Import, il sera disponible après la mise à jour vers cette nouvelle version de GeoNature. Vous pouvez configurer les permissions de vos utilisateurs si vous souhaitez qu'ils y accèdent
- la gestion des permissions et des JDD associés aux modules a évolué. La migration est gérée automatiquement lors de la mise à jour pour garantir un fonctionnement identique. Voir la documentation (XXXXXXXXX) pour en savoir plus
+ - supprimer le dossier import, il ne sera plus utilisé dans la 2.15
+ - reporter la configuration du module Import dans le fichier de configuration de GeoNature
(dans le bloc `[IMPORT]` ; voir le fichier `default_config.toml.example`)
- La synchronisation avec le service MTD de l'INPN n'est plus intégrée dans le code de GeoNature, elle a été déplacée dans un module externe : https://github.com/PnX-SI/mtd_sync
- Si vous l'utilisiez, supprimez les variables de configuration suivantes du fichier `geonature_config.toml` :
- `XML_NAMESPACE`, `MTD_API_ENDPOINT`
- toutes les variables dans `[CAS_PUBLIC]`, `[CAS]`, `[CAS.CAS_USER_WS]`, `[MTD]`
- `ID_USER_SOCLE_1` et `ID_USER_SOCLE_2` dans la section `BDD`
- - Installez le nouveau module externe à l'aide de la commande : `pip install git+https://github.com/PnX-SI/mtd_sync`
- - Remplissez la configuration dans un fichier `mtd_sync.toml`
+
2.14.2 (2024-05-28)
+-------------------

**🚀 Nouveautés**
- - Le profil est calculé à partir des observations considérées comme valides -
+ + {{ 'FicheTaxon.MessageProfil' | translate }} +
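Pour situer l'origine de ces indicateurs : côté base de données, le profil d'un taxon est matérialisé dans le schéma `gn_profiles` à partir des observations considérées comme valides de la Synthèse. Esquisse SQL indicative (le nom de la vue `gn_profiles.vm_valid_profiles` et de ses colonnes est supposé, à vérifier sur votre instance) :

```sql
-- Esquisse hypothétique : lecture des indicateurs du profil d'un taxon,
-- calculés sur les observations considérées comme valides de la Synthèse.
SELECT p.cd_ref,
       p.count_valid_data,              -- observation(s) valide(s)
       p.first_valid_data,              -- première observation
       p.last_valid_data,               -- dernière observation
       p.altitude_min, p.altitude_max   -- plage d'altitude(s), en m
FROM gn_profiles.vm_valid_profiles p
WHERE p.cd_ref = 61153;                 -- exemple : un cd_ref TaxRef quelconque
```

Ce sont ces mêmes champs (`count_valid_data`, `first_valid_data`, `last_valid_data`, `altitude_min`/`altitude_max`) que le composant `tab-profile` affiche dans le diff suivant.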
diff --git a/frontend/src/app/syntheseModule/taxon-sheet/tab-profile/tab-profile.component.ts b/frontend/src/app/syntheseModule/taxon-sheet/tab-profile/tab-profile.component.ts
index b8e0eeb828..56a044c0e6 100644
--- a/frontend/src/app/syntheseModule/taxon-sheet/tab-profile/tab-profile.component.ts
+++ b/frontend/src/app/syntheseModule/taxon-sheet/tab-profile/tab-profile.component.ts
@@ -14,25 +14,25 @@ import { TaxonSheetService } from '../taxon-sheet.service';

const INDICATORS: Array<Indicator> = [
  {
-    name: 'observation(s) valide(s)',
+    name: 'observation(s) valide(s)*',
    matIcon: 'search',
    field: 'count_valid_data',
    type: 'number',
  },
  {
-    name: 'Première observation',
+    name: 'Première observation*',
    matIcon: 'schedule',
    field: 'first_valid_data',
    type: 'date',
  },
  {
-    name: 'Dernière observation',
+    name: 'Dernière observation*',
    matIcon: 'search',
    field: 'last_valid_data',
    type: 'date',
  },
  {
-    name: "Plage d'altitude(s)",
+    name: "Plage d'altitude(s)*",
    matIcon: 'terrain',
    field: ['altitude_min', 'altitude_max'],
    unit: 'm',
diff --git a/frontend/src/app/syntheseModule/taxon-sheet/taxon-sheet.route.service.ts b/frontend/src/app/syntheseModule/taxon-sheet/taxon-sheet.route.service.ts
index b045a9e1c2..8676c6bb89 100644
--- a/frontend/src/app/syntheseModule/taxon-sheet/taxon-sheet.route.service.ts
+++ b/frontend/src/app/syntheseModule/taxon-sheet/taxon-sheet.route.service.ts
@@ -20,7 +20,7 @@ interface Tab {
export const ALL_TAXON_SHEET_ADVANCED_INFOS_ROUTES: Array<Tab> = [
  {
-    label: 'Synthèse Géographique',
+    label: 'Synthèse géographique',
    path: 'geographic_overview',
    component: TabGeographicOverviewComponent,
    configEnabledField: null, // make it always available !
diff --git a/frontend/src/assets/i18n/fr.json b/frontend/src/assets/i18n/fr.json
index 0d1e5c965c..f25221cf29 100644
--- a/frontend/src/assets/i18n/fr.json
+++ b/frontend/src/assets/i18n/fr.json
@@ -271,6 +271,9 @@
    "next": "Voir le média suivant",
    "previous": "Voir le média précédent"
  },
+  "FicheTaxon": {
+    "MessageProfil": "* Ce profil est calculé sur les observations considérées comme valides présentes dans la Synthèse. Il permet de faciliter la validation de nouvelles observations en les comparant à la répartition spatiale, altitudinale et phénologique des observations valides."
+  },
  "Import": {
    "DestinationLabel": "Destination",
    "DestinatinationDatasetLabel": "Jeu de données",
diff --git a/install/migration/migration.sh b/install/migration/migration.sh
index 1ced113c5b..2af2c308c7 100755
--- a/install/migration/migration.sh
+++ b/install/migration/migration.sh
@@ -133,6 +133,11 @@ cd "${newdir}/install"
./01_install_backend.sh
source "${newdir}/backend/venv/bin/activate"

+# before 2.15 - If gn_module_import module previously installed (lien symbolique, d'où le test -L)
+if [ -L "${olddir}"/frontend/external_modules/import ]; then
+    rm "${olddir}"/frontend/external_modules/import
+fi
+
echo "Installation des modules externes …"
# Modules before 2.11
if [ -d "${olddir}/external_modules/" ]; then
@@ -264,36 +269,39 @@ deactivate
if [ -f "/etc/systemd/system/taxhub.service" ]; then
    sudo systemctl stop taxhub
    sudo systemctl disable taxhub
-    sudo rm /etc/systemd/system/taxhub
+    sudo rm /etc/systemd/system/taxhub.service
    sudo systemctl daemon-reload
    sudo systemctl reset-failed
fi

# before 2.15 - Suppression de l'application Taxhub et de la configuration apache
if [ -f "/etc/apache2/sites-available/taxhub.conf" ]; then
-    rm /etc/apache2/sites-available/taxhub.conf
+    sudo rm /etc/apache2/sites-available/taxhub.conf
fi
if [ -f "/etc/apache2/sites-available/taxhub-le-ssl.conf" ]; then
-    rm /etc/apache2/sites-available/taxhub-le-ssl.conf
-    rm -r /var/log/taxhub/
+    sudo rm /etc/apache2/sites-available/taxhub-le-ssl.conf
+    sudo rm -r /var/log/taxhub/
fi
if [ -f "/etc/apache2/conf-available/taxhub.conf" ]; then
-    rm /etc/apache2/conf-available/taxhub.conf
+    sudo rm /etc/apache2/conf-available/taxhub.conf
fi
if [ -f "/etc/apache2/conf-available/taxhub-le-ssl.conf" ]; then
-    rm /etc/apache2/conf-available/taxhub-le-ssl.conf
-    rm -r /var/log/taxhub/
+    sudo rm /etc/apache2/conf-available/taxhub-le-ssl.conf
+    sudo rm -r /var/log/taxhub/
fi

# before 2.15 - Suppression de l'application Taxhub et rapatriement des médias TaxHub
if [ ! -d "${newdir}/backend/media/taxhub" ];then
    mkdir -p "${newdir}/backend/media/taxhub"
-    cp -r "${TAXHUB_DIR}"/static/medias/* "${newdir}"/backend/media/taxhub/
+    if [ -d "${TAXHUB_DIR}"/static/medias ]; then
+        cp -r "${TAXHUB_DIR}"/static/medias/* "${newdir}"/backend/media/taxhub/
+    fi
fi
+
sudo apachectl restart

echo "Migration terminée"
diff --git a/pyproject.toml b/pyproject.toml
index 16e27955b2..68fe662ca5 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,25 +1,20 @@
[tool.pytest.ini_options]
minversion = "6.0"
-testpaths = [
-    "backend/geonature/tests/",
-]
+testpaths = ["backend/geonature/tests/"]
addopts = "--benchmark-skip"

[tool.coverage.run]
source = [
-    "backend/geonature/",
-    "contrib/occtax/backend/occtax/",
-    "contrib/gn_module_occhab/backend/gn_module_occhab/",
-    "contrib/gn_module_validation/backend/gn_module_validation/",
-]
-omit = [
-    "*/tests/*",
-    "*/migrations/*",
+    "backend/geonature/",
+    "contrib/occtax/backend/occtax/",
+    "contrib/gn_module_occhab/backend/gn_module_occhab/",
+    "contrib/gn_module_validation/backend/gn_module_validation/",
]
+omit = ["*/tests/*", "*/migrations/*"]

[tool.black]
line-length = 100
-exclude ='''
+exclude = '''
(
  /(
    \.eggs  # exclude a few common directories in the