README.md
Email Sophan/Rich asking for a Windows code signing certificate.

They send a 1st email:
"To generate your private key and create a certificate request for your InCommon Code Signing certificate:"
with an associated link. Click the link (MUST USE Windows Internet Explorer); choose 4096 for the key strength.

They send a 2nd email:
"The InCommon code signing certificate for Frank Morgan [email protected] has been issued"
with an associated link. Click the link (MUST USE Windows Internet Explorer on the same machine where the 1st email link was clicked). A certificate file with the included private key (generated by the certificate authority) is downloaded automatically; choose SAVE.

Import the certificate: open Internet Explorer->Tools->'Internet options...'->Content tab->Certificates btn->Personal tab->IMPORT btn->'Personal'->detail says pkcs7->Finish. The cert now appears in IExplorer->Tools->'Internet options...'->Content tab->Certificates.

Export the cert with its private key to pkcs12/pfx: IExplorer->Tools->'Internet options...'->Content tab->Certificates->choose the code signing cert just imported->Export...->export with key->choose pfx/pkcs12, extended properties and all certs (leave "delete private key" unchecked)->enter the password from /usr/local/deploy/VCELL_UCONN_....pswd.txt->save to a file with a .pfx extension.
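Before using the exported .pfx for signing, it is worth checking that it really contains both the certificate and the private key. A minimal sketch with openssl; the file name and password here are placeholders (not the real deploy secrets), and a throwaway self-signed key/cert is generated first just to make the check runnable:

```shell
# Demo only: create a throwaway self-signed key/cert and bundle it as a .pfx,
# then run the same checks you would run on the real exported certificate.
openssl req -x509 -newkey rsa:2048 -keyout demo.key -out demo.crt \
  -days 1 -nodes -subj "/CN=demo" 2>/dev/null
openssl pkcs12 -export -inkey demo.key -in demo.crt \
  -out demo.pfx -passout pass:changeit
# The actual checks: the bundle should yield a certificate and a private key
openssl pkcs12 -in demo.pfx -passin pass:changeit -nokeys 2>/dev/null \
  | grep -c "BEGIN CERTIFICATE"
openssl pkcs12 -in demo.pfx -passin pass:changeit -nocerts -nodes 2>/dev/null \
  | grep -c "PRIVATE KEY"
```

Run the same two `openssl pkcs12` checks against the real .pfx (with its real password) before handing it to the signing tooling.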

INFO build containers (do this later): builds the Docker images (1-9 below) and pushes them into a Docker registry (e.g. namespace = "vcell-docker.cam.uchc.edu:5000") with the image tag derived from the Git commit hash at build time (e.g. tag = "392af4d"). The vcell-batch Singularity image (item 10 below) is built from the vcell-batch Docker image for use within an HPC environment.

  1. vcell-api => docker image in registry (api)
  2. vcell-db => docker image in registry (db)
  3. vcell-sched => docker image in registry (sched)
  4. vcell-submit => docker image in registry (submit)
  5. vcell-mongodb => docker image in registry (mongodb)
  6. vcell-activemqint => docker image in registry (activemqint)
  7. vcell-activemqsim => docker image in registry (activemqsim)
  8. vcell-clientgen => docker image in registry (generates Install4J installers during deployment)
  9. vcell-batch => docker image in registry (for batch processing, includes java code and Linux solver executables)
  10. vcell-batch.img => singularity image in ./singularity-vm/ (built from vcell-batch docker image)
  11. vcell-opt => docker image in registry (opt)

(../swarm/README.md): build Singularity image for Linux solvers

builds a Singularity image named ./singularity-vm/${namespace}_vcell-batch_${tag}.img (with '/' and ':' in the Docker reference replaced by '_') from the Docker image ${namespace}/vcell-batch:${tag}
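The Singularity image file name is derived from the Docker reference by replacing '/' and ':' with '_'. A sketch of that name mangling, using the example namespace and tag from this document:

```shell
namespace="vcell-docker.cam.uchc.edu:5000/schaff"
tag="392af4d"
# '/' and ':' in the Docker reference become '_' in the Singularity file name
img="$(echo "${namespace}/vcell-batch:${tag}" | tr '/:' '__').img"
echo "./singularity-vm/${img}"
# -> ./singularity-vm/vcell-docker.cam.uchc.edu_5000_schaff_vcell-batch_392af4d.img
```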

0. Choose VCell solvers version to include in build (Change only if new vcell-solvers build was committed)

Tag vcell-solvers commit (if not already)

git clone https://github.com/virtualcell/vcell-solvers.git
cd vcell-solvers
//List current tags
git tag
//Create a new tag by incrementing the latest tag from the list; must start with v, rest all digits, e.g. "git tag v0.0.22"
theNewTag=vx.x.x
git tag ${theNewTag}
//Push the new tag to github, e.g. "git push origin v0.0.22"
git push origin ${theNewTag}
//github will alert travis-ci (mac,linux) and appveyor (win) to start building the tagged commit for client local solvers
//they will upload their archived solvers to github under the tagged commit (win64.zip, linux64.tgz, mac64.tgz)
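Choosing the next tag by hand is error-prone; a sketch of computing it from the newest existing v-tag. This assumes tags of the form vMAJOR.MINOR.PATCH, and `next_tag` is a helper name invented here:

```shell
# Compute the next patch-level tag from an existing one, e.g. v0.0.22 -> v0.0.23
next_tag() {
  echo "${1#v}" | awk -F. '{printf "v%d.%d.%d", $1, $2, $3 + 1}'
}
# newest existing tag by version sort (run inside the vcell-solvers clone)
latest=$(git tag --list 'v*' --sort=-v:refname 2>/dev/null | head -n 1)
theNewTag=$(next_tag "${latest:-v0.0.22}")
echo "$theNewTag"
```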

----Wait for the win,linux and mac archives to appear on github (solvers built by travisci,appveyor, used on VCell Local CLIENTS) under the new commit tag (browse to https://github.com/virtualcell/vcell-solvers/releases/tag/${theNewTag})
----Wait for ${theNewTag} to appear on dockerhub (builds solvers used on VCell SERVER), (browse to https://hub.docker.com/r/virtualcell/vcell-solvers/tags)

Check Solver build finished (if necessary) https://hub.docker.com/r/virtualcell/vcell-solvers/builds/ and check tag exists https://hub.docker.com/r/virtualcell/vcell-solvers/tags/
See vcell-solvers README.md
--Edit {vcellroot}/docker/build/Dockerfile-batch-dev
----theTag=the tag that was created during a separate vcell-solvers commit process (see https://github.com/virtualcell/vcell-solvers.git, README.md)
----Get the tag from dockerhub, pick the tag you want, usually the latest
----Change the line "FROM virtualcell/vcell-solvers:{theTag}" to use the proper tag number

Edit {vcellroot}/vcell-core/pom.xml
----theTag= created as above
----Get the tag from github (https://github.com/virtualcell/vcell-solvers/tags), pick the tag you want, usually the latest
----Change all lines "https://github.com/virtualcell/vcell-solvers/releases/download/v{theTag}/{linux,win,mac}64.tgz" to use that tag
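The Dockerfile edit can be scripted with sed instead of editing by hand. A sketch on a demo stand-in file (the tag value and file contents are examples; the real file lives at the {vcellroot} path above):

```shell
theTag="v0.0.22"                         # the tag chosen from dockerhub/github
# demo stand-in for {vcellroot}/docker/build/Dockerfile-batch-dev
printf 'FROM virtualcell/vcell-solvers:v0.0.21\n' > Dockerfile-batch-dev
# rewrite the FROM line to the chosen solver tag (keeps a .bak backup)
sed -i.bak "s|^FROM virtualcell/vcell-solvers:.*|FROM virtualcell/vcell-solvers:${theTag}|" \
  Dockerfile-batch-dev
cat Dockerfile-batch-dev
```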

MUST commit any changes made above to the VCell project on github

1. Build VCell containers (from {vcell_project_dir}/docker/build/ directory)

Get the VCell project: login to {theBuildHost} (vcell-node1.cam.uchc.edu) as user 'vcell'

theBuildHost=vcell-node1.cam.uchc.edu
ssh vcell@${theBuildHost}
cd /opt/build
sudo rm -rf vcell (if necessary)
git clone https://github.com/virtualcell/vcell.git
---if you want a branch or specific commit then do:  
      cd vcell; git checkout {commitHash#}  
      git name-rev HEAD (displays which branch you're on)  
      git branch (shows the branch the commit is on, marked with *)  
cd vcell/docker/build

Build ALL containers (sets the Docker tags to first 7 characters of Git commit hash)

export VCELL_TAG=`git rev-parse HEAD | cut -c -7`
theRegistryHost=vcell-docker.cam.uchc.edu
export VCELL_REPO_NAMESPACE=${theRegistryHost}:5000/schaff
echo $VCELL_TAG $VCELL_REPO_NAMESPACE
./build.sh all $VCELL_REPO_NAMESPACE $VCELL_TAG
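A quick sanity check that VCELL_TAG was derived correctly; a valid tag is the first 7 hex characters of the commit hash (`short_tag` is a helper name invented here):

```shell
# first 7 characters of the Git commit hash
short_tag() { echo "$1" | cut -c -7; }
VCELL_TAG=$(short_tag "$(git rev-parse HEAD 2>/dev/null || echo 0000000)")
# warn loudly if the tag is not exactly 7 hex characters
echo "$VCELL_TAG" | grep -Eq '^[0-9a-f]{7}$' || echo "bad VCELL_TAG: $VCELL_TAG"
```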

IGNORE (cd singularity-vm ; rm  vcell-docker.cam.uchc.edu_5000_schaff_vcell-batch_xxxxx.img ; cp /opt/build/frm/new2.img vcell-docker.cam.uchc.edu_5000_schaff_vcell-batch_xxxxx.img)

Info: builds the containers (e.g. vcell-docker.cam.uchc.edu:5000/schaff/vcell-api:f18b7aa) and uploads them to a private Docker registry (e.g. vcell-docker.cam.uchc.edu:5000).
A Singularity image for vcell-batch is also generated and stored locally (VCELL_ROOT/docker/build/singularity-vm) as no local Singularity repository is available yet. The Singularity image is downloaded by the solver .slurm.sub script to the server file system and invoked for numerical simulation on the HPC cluster.

2. Deploy vcell using docker swarm mode

//Requirements during deployment while building the vcell-clientgen container
It is assumed that during deployment there is a directory (/usr/local/deploy/.install4j6/jres) which is mapped to the VOLUME /jre
defined in Dockerfile-clientgen-dev and used inside the vcell-clientgen container. Set up the "build secrets" directory
(e.g. /usr/local/deploy/.install4j6/jres/ on vcell-node1.cam.uchc.edu) with the Java JRE bundles that are compatible with the installed version of Install4J:
-----/usr/local/deploy/.install4j6/jres/linux-amd64-1.8.0_66.tar.gz
-----/usr/local/deploy/.install4j6/jres/macosx-amd64-1.8.0_66.tar.gz
-----/usr/local/deploy/.install4j6/jres/windows-x86-1.8.0_66.tar.gz
-----/usr/local/deploy/.install4j6/jres/linux-x86-1.8.0_66.tar.gz
-----/usr/local/deploy/.install4j6/jres/windows-amd64-1.8.0_66.tar.gz
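Before building vcell-clientgen it is worth confirming that all five JRE bundles are present; a sketch (`check_jres` is a helper name invented here):

```shell
# Report any missing Install4J JRE bundles in the given directory
check_jres() {
  for f in linux-amd64-1.8.0_66.tar.gz macosx-amd64-1.8.0_66.tar.gz \
           windows-x86-1.8.0_66.tar.gz linux-x86-1.8.0_66.tar.gz \
           windows-amd64-1.8.0_66.tar.gz; do
    [ -f "$1/$f" ] || echo "missing: $f"
  done
}
check_jres /usr/local/deploy/.install4j6/jres
```

No output means all bundles are in place.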

Build VCell and deploy to production servers (from ./docker/swarm/ directory)

NOTE: the current SLURM partition for vcell is found by running "sinfo -N -h -p vcell2 --Format='nodelist'" (must be run on vcell-service or another slurm node)
Assume step 1 has completed successfully

login to vcell-node1 as user 'vcell'

cd /opt/build/vcell/docker/swarm

Run the following bash commands in your terminal (sets the Docker tags to first 7 characters of Git commit hash)

export VCELL_TAG=`git rev-parse HEAD | cut -c -7`
export VCELL_REPO_NAMESPACE=vcell-docker.cam.uchc.edu:5000/schaff

Determine build number for deploying
-----a. Get currently deployed client

echo Alpha `curl --silent http://vcell.org/webstart/Alpha/updates.xml | xmllint --xpath '//updateDescriptor/entry/@newVersion' - | awk '{print $1;}'` && \
echo Beta `curl --silent http://vcell.org/webstart/Beta/updates.xml | xmllint --xpath '//updateDescriptor/entry/@newVersion' - | awk '{print $1;}'` && \
echo Rel `curl --silent http://vcell.org/webstart/Rel/updates.xml | xmllint --xpath '//updateDescriptor/entry/@newVersion' - | awk '{print $1;}'`

-----b. Create final build number
If deploying server only: theBuildNumber = the number from above (the 4th digit), e.g. Alpha newVersion="7.0.0.51" gives theBuildNumber=51
If deploying client: theBuildNumber = 1 + the number from above (the 4th digit), e.g. Alpha newVersion="7.0.0.51" gives theBuildNumber=52
Edit 'VCELL_BUILD='theBuildNumber in the appropriate site block below
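The 4th-digit extraction and increment can be sketched as follows (the newVersion value is the example from above):

```shell
newVersion="7.0.0.51"                   # from updates.xml, e.g. Alpha
# the build number is the 4th dot-separated field
current=$(echo "$newVersion" | awk -F. '{print $4}')
theBuildNumber=$((current + 1))         # add 1 only when deploying the client
echo "server-only: $current   client: $theBuildNumber"
# -> server-only: 51   client: 52
```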

To create the deploy configuration file, choose the site block being deployed.
Info: creates a deploy configuration file (e.g. Test 7.0.0 build 8) for the server. Note that some server configuration is hard-coded in the serverconfig-uch.sh script.

MUST EDIT VCELL_BUILD=${theBuildNumber} to be correct

REL

export VCELL_VERSION=7.0.0 VCELL_BUILD=${theBuildNumber} VCELL_SITE=rel
export MANAGER_NODE=vcellapi.cam.uchc.edu
export VCELL_INSTALLER_REMOTE_DIR="/share/apps/vcell3/apache_webroot/htdocs/webstart/Rel"
export VCELL_CONFIG_FILE_NAME=server_${VCELL_SITE}_${VCELL_VERSION}_${VCELL_BUILD}_${VCELL_TAG}.config
./serverconfig-uch.sh $VCELL_SITE $VCELL_REPO_NAMESPACE \
  $VCELL_TAG $VCELL_VERSION $VCELL_BUILD $VCELL_CONFIG_FILE_NAME
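For reference, with example values the exports above produce a config file name like this (the values here are illustrative, not a real deployment):

```shell
VCELL_SITE=rel VCELL_VERSION=7.0.0 VCELL_BUILD=52 VCELL_TAG=f18b7aa
VCELL_CONFIG_FILE_NAME=server_${VCELL_SITE}_${VCELL_VERSION}_${VCELL_BUILD}_${VCELL_TAG}.config
echo "$VCELL_CONFIG_FILE_NAME"
# -> server_rel_7.0.0_52_f18b7aa.config
```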

BETA (not used)

export VCELL_VERSION=7.0.0 VCELL_BUILD=10 VCELL_SITE=beta
export MANAGER_NODE=vcellapi-beta.cam.uchc.edu
export VCELL_INSTALLER_REMOTE_DIR="/share/apps/vcell3/apache_webroot/htdocs/webstart/Beta"
export VCELL_CONFIG_FILE_NAME=server_${VCELL_SITE}_${VCELL_VERSION}_${VCELL_BUILD}_${VCELL_TAG}.config
./serverconfig-uch.sh $VCELL_SITE $VCELL_REPO_NAMESPACE \
  $VCELL_TAG $VCELL_VERSION $VCELL_BUILD $VCELL_CONFIG_FILE_NAME

ALPHA

export VCELL_VERSION=7.2.0 VCELL_BUILD=68 VCELL_SITE=alpha
export MANAGER_NODE=vcellapi-beta.cam.uchc.edu
export VCELL_INSTALLER_REMOTE_DIR="/share/apps/vcell3/apache_webroot/htdocs/webstart/Alpha"
export VCELL_CONFIG_FILE_NAME=server_${VCELL_SITE}_${VCELL_VERSION}_${VCELL_BUILD}_${VCELL_TAG}.config
./serverconfig-uch.sh $VCELL_SITE $VCELL_REPO_NAMESPACE \
  $VCELL_TAG $VCELL_VERSION $VCELL_BUILD $VCELL_CONFIG_FILE_NAME

TEST

export VCELL_VERSION=7.0.0 VCELL_BUILD=7 VCELL_SITE=test2
export MANAGER_NODE=vcellapi-beta.cam.uchc.edu
export VCELL_INSTALLER_REMOTE_DIR="/share/apps/vcell3/apache_webroot/htdocs/webstart/Test2"
export VCELL_CONFIG_FILE_NAME=server_${VCELL_SITE}_${VCELL_VERSION}_${VCELL_BUILD}_${VCELL_TAG}.config
./serverconfig-uch.sh $VCELL_SITE $VCELL_REPO_NAMESPACE \
  $VCELL_TAG $VCELL_VERSION $VCELL_BUILD $VCELL_CONFIG_FILE_NAME

Finalize Deploy

Using the configuration and Docker images, generate the client installers and deploy the server as a Docker stack in swarm mode. Note that the Docker and Singularity images and the docker-compose.yml file remain independent of the deployed configuration; only the final deployed configuration file vcellapi.cam.uchc.edu:/usr/local/deploy/config/$VCELL_CONFIG_FILE_NAME contains server dependencies. Get the platform installer from the web site (e.g. http://vcell.org/webstart/Test/VCell_Test_macos_7.0.0_7_64bit.dmg).

Choose 1 of the following:

CLIENT and SERVER deploy commands (may request password at some point)

rm -rf ./generated_installers
./generate_installers.sh ./${VCELL_CONFIG_FILE_NAME}


(scp C:\Users\frm\VCellTrunkGitWorkspace2\vcell\vcell-imagej-helper\target\vcell-imagej-helper-0.0.1-SNAPSHOT.jar vcell@vcell-node1:/share/apps/vcell3/apache_webroot/htdocs/webstart/vcell-imagej-helper-Alpha_Version_7_2_0_build_36.jar __replace numbers__)
(scp C:\Users\frm\VCellTrunkGitWorkspace2\vcell\vcell-client\src\main\resources\vcell_dynamic_properties.csv vcell@vcell-node1:/share/apps/vcell3/apache_webroot/htdocs/webstart/vcell_dynamic_properties.csv)


//
//Windows installer only:
//Symantec false-positive whitelist report
//
//from vcell-node1:/opt/build/vcell/docker/swarm/
rm /share/apps/vcell3/apache_webroot/htdocs/webstart/symantec_whitelist/*
cp ./generated_installers/VCell_{Alpha,Rel}_windows-x64_7_2_0_${VCELL_BUILD}_64bit.exe /share/apps/vcell3/apache_webroot/htdocs/webstart/symantec_whitelist/
// Use a web browser and go to http://vcell.org/webstart/symantec_whitelist/
// Copy the link location from the web browser for use in the Symantec request
// Open https://submit.symantec.com/false_positive/ in the browser
// Choose the "Incorrectly Detected by Symantec" tab
Submission Type -> Provide Direct Download URL
A1. When downloading or uploading a file
B2. Symantec Endpoint Protection 14.x
C1. Download/File Insight (Reputation Based Detection) e.g. WS.Reputation.1,Suspicious.Insight, WS.Malware.*
name of detection: WS.Reputation.1
Provide direct download URL (e.g. http://vcell.org/webstart/symantec_whitelist/VCell_Rel_windows-x64_7_2_0_40_64bit.exe)
name of software: VCell
Description:
Malware, Insight Network Threat
Auto protect scan
Software installer created with Install4J; when the VCell installer is downloaded (to a Windows 7 Enterprise client) from
http://vcell.org/webstart (using Windows Internet Explorer) and run, it is immediately quarantined with WS.Reputation.1.
Distributed by the University of Connecticut Health Center, Farmington, CT 06032

//
//MacOS installer only:
//Installer App Notarizing
//
// These steps are only done once, not needed every time
//-------------------------
// Login apple-id (Apple ID account page)
     Sign up for two-factor authentication (do this only once)->
     Under the 'Security' section, create an app-specific password called "altoolpw" (only visible if you are using two-factor auth)->
//Store ‘altoolpw’ in local keychain (do this only once)
     xcrun altool --list-providers -u "[email protected]" -p "@keychain:altoolpw"
//Check ‘altoolpw’ 
     xcrun altool --list-providers -u "[email protected]" -p "@keychain:altoolpw"
//---------------------------
//
//
// Start Notarize task here
// Delete old notarized app file
     rm /Users/vcellbuild/Downloads/VCell_*.dmg
//Copy MacOS installer built by install4j (to Mac with xcode and credentials installed)
     scp vcell@vcell-node1:/opt/build/vcell/docker/swarm/generated_installers/VCell_{Rel,Alpha}_macos_7_2_0_{buildnum}_64bit.dmg /Users/vcellbuild/Downloads
//Notarize request for the VCell MacOS .dmg created by install4j
     xcrun altool --notarize-app --primary-bundle-id "org.vcell.i4jmacos" --username "[email protected]" --password "@keychain:altoolpw" --file /Users/vcellbuild/Downloads/VCell_{Rel,Alpha}_macos_7_2_0_{buildnum}_64bit.dmg
//Save the requestUUID if the process doesn’t fail
     RequestUUID = fad728cf-47f0-493b-b666-f11aa61932c1 (failed)
                   e6397f48-38a9-4285-adea-9e2221fe74d0 (failed, but fewer errors)
                   7ff1cb03-ed6a-4572-8208-76cd59039db3 (success)
//Check Notarization status (wait ~5 minutes)
     xcrun altool --notarization-history 0 -u "[email protected]" -p "@keychain:altoolpw"
// If there is a problem/failure - Get full Notarization log url and view in web browser (if notarization status failed, will contain web address of error log)
     xcrun altool --notarization-info fad728cf-47f0-493b-b666-f11aa61932c1 -u "[email protected]" -p "@keychain:altoolpw"
//Staple Ticket to Software (if notarization status has ‘success’)
     xcrun stapler staple /Users/vcellbuild/Downloads/VCell_{Rel,Alpha}_macos_7_2_0_{buildnum}_64bit.dmg
// Remove unnotarized app from vcell-node1
     ssh vcell@vcell-node1 rm /opt/build/vcell/docker/swarm/generated_installers/VCell_{Rel,Alpha}_macos_7_2_0_{buildnum}_64bit.dmg
//Copy Notarized software back to deploy server (vcell-node1) and continue deployment
     scp /Users/vcellbuild/Downloads/VCell_{Rel,Alpha}_macos_7_2_0_{buildnum}_64bit.dmg vcell@vcell-node1:/opt/build/vcell/docker/swarm/generated_installers/VCell_{Rel,Alpha}_macos_7_2_0_{buildnum}_64bit.dmg
// Change ownership  of file to root:root on vcell-node1 manually
// change the name of the file from "VCell_{Rel,Alpha}_windows-x32_7_2_0_{buildnum}_32bit.exe" to "VCell_{Rel,Alpha}_windows_7_2_0_{buildnum}_32bit.exe" on vcell-node1 manually (remove -x32)
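The manual rename (dropping "-x32") can be sketched as follows; the file name is an example, and the actual `mv` would be run in the generated_installers directory on vcell-node1:

```shell
old="VCell_Rel_windows-x32_7_2_0_52_32bit.exe"
# remove the "-x32" suffix from the platform part of the name
new=$(echo "$old" | sed 's/windows-x32/windows/')
echo "$new"
# -> VCell_Rel_windows_7_2_0_52_32bit.exe
# on vcell-node1: mv "$old" "$new"
```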
./deploy.sh \
   --ssh-user vcell --ssh-key ~/.ssh/id_rsa --install-singularity --build-installers --installer-deploy-dir $VCELL_INSTALLER_REMOTE_DIR --link-installers \
   ${MANAGER_NODE} \
   ./${VCELL_CONFIG_FILE_NAME} /usr/local/deploy/config/${VCELL_CONFIG_FILE_NAME} \
   ./docker-compose.yml        /usr/local/deploy/config/docker-compose_${VCELL_TAG}.yml \
   vcell${VCELL_SITE}

SERVER only deploy commands (may request password at some point)

./deploy.sh \
   --ssh-user vcell --ssh-key ~/.ssh/id_rsa --install-singularity  \
   ${MANAGER_NODE} \
   ./${VCELL_CONFIG_FILE_NAME} /usr/local/deploy/config/${VCELL_CONFIG_FILE_NAME} \
   ./docker-compose.yml        /usr/local/deploy/config/docker-compose_${VCELL_TAG}.yml \
   vcell${VCELL_SITE}

[start bash on compute node] IGNORE srun --account=pi-loew --nodes=1 --ntasks-per-node=1 --qos=vcell --partition=vcell --time=01:00:00 --pty bash -i

Info Local Service Debugging

cd /usr/local/deploy/config
sudo $(cat ${VCELL_CONFIG_FILE_NAME} | xargs) docker stack deploy -c docker-compose_${VCELL_TAG}.yml vcellalpha
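The `$(cat ... | xargs)` idiom turns the config file's KEY=value lines into environment assignments for the stack deploy. A runnable sketch with a demo config file (the file name and values are invented here):

```shell
# demo config file with the same KEY=value-per-line format as the server config
printf 'VCELL_SITE=alpha\nVCELL_VERSION=7.2.0\n' > demo.config
# xargs flattens the lines onto one line; env applies them to the child command
env $(cat demo.config | xargs) sh -c 'echo "site=$VCELL_SITE version=$VCELL_VERSION"'
# -> site=alpha version=7.2.0
```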