
Minio storage

Loïc Sarrazin edited this page Dec 2, 2021 · 13 revisions

What is Minio?

Minio is "a high performance distributed object storage server, designed for large-scale private cloud infrastructure. Minio is widely deployed across the world with over 104.4M+ docker pulls."

Basically, it allows us to run our own storage server that exposes an S3-compatible API, standing in for AWS S3 storage.

Pre-reqs

Install docker:

    curl https://get.docker.com | sudo sh
    sudo usermod -aG docker $USER
    # reconnect

Open port 9000 (used by the non-SSL setup below), and optionally 443 for SSL
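To sanity-check the prerequisites, something like the following can be used (ufw is shown as an example firewall; adapt the commands to your distribution):

```shell
# Verify docker works without sudo (requires logging out and back in after usermod)
docker run --rm hello-world

# Open the ports (ufw shown as an example; your firewall may differ)
sudo ufw allow 9000/tcp   # Minio without SSL
sudo ufw allow 443/tcp    # Minio with SSL (optional)
```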

Minio running

Basic Minio service without SSL

This is recommended for users who wish to host a local Minio instance as their storage backend, for testing purposes, or for a locally hosted platform.

First, create a docker-compose file dedicated to the Minio service.
To do so, create a file called docker-compose-minio.yml with the following content:

version: '2'
services:
  minio:
    image: minio/minio
    environment:
      MINIO_ROOT_USER: 1234567890
      MINIO_ROOT_PASSWORD: 1234567890
    ports: ["9000:9000"]
    command: server --address ":9000" /data
    volumes:
      - ~/data:/data
      - ~/minio_config:/root/.minio
    container_name: minio

You can now start the service with

docker-compose -f docker-compose-minio.yml up -d 

Doing so will pull the image and start the service. Note that Minio will be reached both from the browser and from within the containers. On Windows, for example, accessing the service from within a container requires a host entry so that the hostname points to localhost. To do so, edit your "/etc/hosts" file and add the following line:

127.0.0.1 minio

with minio here referring to the container name.
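To check that the service came up, Minio exposes an unauthenticated health endpoint; a quick smoke test might look like:

```shell
# Liveness probe: should return HTTP 200 when Minio is up
curl -I http://localhost:9000/minio/health/live

# If the probe fails, inspect the container logs
docker logs minio
```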

Minio service with SSL [Outdated and incomplete]

Make sure your SSL certs are named public.crt and private.key, then place them in the ~/minio_config/certs directory.

docker run -p 443:443 \
  -e "MINIO_ACCESS_KEY=1234567890" \
  -e "MINIO_SECRET_KEY=1234567890" \
  -v ~/data:/data \
  -v ~/minio_config:/root/.minio \
  -d \
  minio/minio server --address ":443" /data

More information on the SSL configuration and certificate generation

Configuring Minio for Codalab [In Construction]

These steps are mandatory whether the installation is local or not.

  1. Create a private bucket (I named mine "private")
  2. Create a public bucket (I named mine "public")
  3. Give the public bucket "read" permissions
  4. Modify your codalab-competitions .env to use the following settings:
    # Minio
    DEFAULT_FILE_STORAGE=storages.backends.s3boto3.S3Boto3Storage
    AWS_ACCESS_KEY_ID=1234567890
    AWS_SECRET_ACCESS_KEY=1234567890
    AWS_STORAGE_BUCKET_NAME=public
    AWS_STORAGE_PRIVATE_BUCKET_NAME=private
    AWS_S3_CALLING_FORMAT=boto.s3.connection.OrdinaryCallingFormat
    AWS_S3_HOST=minio:9000 # Minio host, without http/https
    AWS_QUERYSTRING_AUTH=False
    S3DIRECT_REGION=us-east-1
    S3_USE_SIGV4=True
    AWS_S3_ENDPOINT_URL=http://minio:9000 # Minio host, with http/https
    AWS_S3_REGION_NAME=us-east-1 # Your region name
    AWS_S3_SIGNATURE_VERSION=s3v4
    AWS_S3_ADDRESSING_STYLE=path
    AWS_S3_USE_SSL=False # Beware whether SSL is used in your configuration
    S3DIRECT_URL_STRUCTURE=http://{0}/{1} # Use https if SSL is configured
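One way to verify that these credentials and endpoints work, assuming the aws CLI is installed (an assumption; it is not part of the Codalab stack), is to list the buckets against the Minio endpoint:

```shell
# Point the aws CLI at the Minio endpoint instead of AWS
export AWS_ACCESS_KEY_ID=1234567890
export AWS_SECRET_ACCESS_KEY=1234567890

# List the buckets; should show "public" and "private" once they are created
aws --endpoint-url http://minio:9000 s3 ls
```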

Setting policies using Minio's "mc" helper [In Construction]:

We start the Minio Client (mc) docker container and get a shell into it:

$ docker run -it --entrypoint=/bin/sh minio/mc

In the docker container, create some environment variables:

export AWS_S3_ENDPOINT_URL=http://172.17.0.1 # Might not be necessary since this is already set in the .env with minio
export MINIO_ACCESS_KEY=1234567890 # Must match the credentials the Minio server was started with
export MINIO_SECRET_KEY=1234567890 # Not necessary if the minio is created with a dedicated docker-compose file.
export AWS_STORAGE_BUCKET_NAME=public # Same as both points above
export AWS_STORAGE_PRIVATE_BUCKET_NAME=private # Same as both points above

Using the minio client, configure the bucket policies:

/usr/bin/mc config host add minio_docker $AWS_S3_ENDPOINT_URL $MINIO_ACCESS_KEY $MINIO_SECRET_KEY; # Registers the Minio server under the alias minio_docker
/usr/bin/mc mb minio_docker/$AWS_STORAGE_BUCKET_NAME; 
/usr/bin/mc mb minio_docker/$AWS_STORAGE_PRIVATE_BUCKET_NAME;
/usr/bin/mc policy set download minio_docker/$AWS_STORAGE_BUCKET_NAME;
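To confirm the buckets and policy took effect, still inside the mc container, you can inspect them (a sketch; the subcommand names are those of the mc version current at the time of writing):

```shell
# List the buckets under the alias
/usr/bin/mc ls minio_docker

# Show the policy applied to the public bucket; should print "download"
/usr/bin/mc policy get minio_docker/$AWS_STORAGE_BUCKET_NAME
```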

SSL: complementary information [Incomplete]

Setup [Certbot](https://certbot.eff.org/)

    sudo apt-get update
    sudo apt-get install software-properties-common
    sudo add-apt-repository ppa:certbot/certbot
    sudo apt-get update
    sudo apt-get install certbot 

And here are the official Minio SSL install docs

Add DNS TXT record

When you run certbot, it will tell you where to add the TXT record and what value to put in it.

Run it

    sudo certbot certonly --manual --preferred-challenges dns -d minio.YOURHOST.com --staple-ocsp -m [email protected] --agree-tos

Move and rename the files appropriately; they are symlinks, so copy them with -L to dereference, like so...

    sudo cp -Lr /etc/letsencrypt/live/minio.YOURHOST.com-0001/fullchain.pem public.crt
    sudo cp -Lr /etc/letsencrypt/live/minio.YOURHOST.com-0001/privkey.pem private.key

    ├── certs
    │   ├── CAs
    │   ├── private.key
    │   └── public.crt
    └── config.json

Renew it

    sudo certbot renew
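Renewal can be automated; here is a sketch using certbot's --deploy-hook, where the cert paths and the container name are assumptions matching the setup above:

```shell
# Renew and, on success, copy the fresh certs into place and restart Minio
sudo certbot renew --deploy-hook \
  'cp -L /etc/letsencrypt/live/minio.YOURHOST.com-0001/fullchain.pem ~/minio_config/certs/public.crt && \
   cp -L /etc/letsencrypt/live/minio.YOURHOST.com-0001/privkey.pem ~/minio_config/certs/private.key && \
   docker restart minio'
```

Note that under sudo, ~ resolves to root's home directory; use an absolute path if your minio_config lives elsewhere.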

Mount + format drive

If you need to use a large drive, these commands may be useful:

    # Format drive
    sudo mkfs.ext4 /dev/vdb

    # Mount it on boot, by adding this to /etc/fstab
    /dev/vdb        /mnt/data       ext4    defaults        0 0

    # Mount everything listed in /etc/fstab (make sure /mnt/data exists first)
    sudo mkdir -p /mnt/data
    sudo mount -a
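To double-check the drive before pointing Minio's data volume at it:

```shell
# Show block devices and where they are mounted
lsblk

# Confirm the filesystem and free space on the new mount
df -h /mnt/data
```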

Copy from * to Minio

Install rclone https://rclone.org/install/

    $ curl https://rclone.org/install.sh | sudo bash
    $ rclone config # add the servers

    # If you need to add more servers, `rclone config` again

    # This next command does the copying from your original server
    # to the new minio server, it may take a while!
    $ rclone copy <server name>:<bucket name> /your/path/to/storage

    # Then if you want to sync to a remote server, you swap source and destination
    $ rclone copy /your/path/to/storage  <server name>:<bucket name>
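If the destination is the Minio instance above, the remote can also be created non-interactively with rclone; a sketch (the remote name minio_remote and the credentials are assumptions matching this guide):

```shell
# Create an S3-type remote pointing at the local Minio server
rclone config create minio_remote s3 \
  provider Minio \
  access_key_id 1234567890 \
  secret_access_key 1234567890 \
  endpoint http://localhost:9000

# Copy a bucket from the old server into Minio
rclone copy <server name>:<bucket name> minio_remote:public
```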

Backup to a remote destination with a cron job script

Edit the variables in the following script, and save it as backup_storage.sh

#!/bin/bash
# Local paths to public and private storage
PUBLIC_STORAGE=/data/public
PRIVATE_STORAGE=/data/private

# The storage name you setup and the buckets for public and private storage
REMOTE_PUBLIC_STORAGE=azure-storage:public
REMOTE_PRIVATE_STORAGE=azure-storage:private

rclone copy $PUBLIC_STORAGE $REMOTE_PUBLIC_STORAGE
rclone copy $PRIVATE_STORAGE $REMOTE_PRIVATE_STORAGE
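After the copies, it may be worth verifying that source and destination match; rclone provides a check command for this:

```shell
# Compare sizes and hashes between local storage and the remote
rclone check $PUBLIC_STORAGE $REMOTE_PUBLIC_STORAGE
rclone check $PRIVATE_STORAGE $REMOTE_PRIVATE_STORAGE
```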

Add this to your @daily or @weekly cron jobs:

# Open cron
crontab -e

# Add this line
@daily /path/to/backup_storage.sh
