DevOps: Docker Tutorial
Basic
1. Install Docker
1.1 Fedora
$ sudo dnf install docker
To start the Docker service, use:
$ sudo systemctl start docker
Now you can verify that Docker is installed correctly and running by launching the hello-world image.
$ sudo docker run hello-world
Start the Docker daemon at boot
To make Docker start when you boot your system, use the command:
$ sudo systemctl enable docker
Why can't I use the docker command as a non-root user by default? The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root, and other users can only access it with sudo. For this reason, the Docker daemon always runs as the root user.
You can either set up sudo to give non-root users access to the docker command,
or you can create a Unix group called docker and add users to it. When the Docker daemon starts, it makes the Unix socket read/writable by the docker group.
Warning: Membership in the docker group is equivalent to root access; see Docker Daemon Attack Surface for details on how this impacts the security of your system.
To create the docker group and add your user:
$ sudo groupadd docker && sudo gpasswd -a ${USER} docker && sudo systemctl restart docker
$ newgrp docker
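To check that the group change took effect (after logging out and back in, or inside the newgrp shell), try running a container without sudo; hello-world makes a handy smoke test:
docker run hello-world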
2. Quickstart
On docker machine (VM, host), create workspace:
git clone https://thuydang@bitbucket.org/thuydang/anthenao.git
Check the docker image; build it or check it out from the repo, etc.
Run docker-compose all:
docker-compose -f ./eco/backend/docker-compose.yml up
# run a single container:
docker-compose -f ./eco/backend/docker-compose.yml up [web, db]
Run container bash:
docker-compose -f ./eco/backend/docker-compose.yml exec web bash
3. Docker image location
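On Linux, Docker stores images, containers, and volumes under its root directory, /var/lib/docker by default (the exact layout depends on the storage driver). You can confirm the location with docker info:
docker info | grep "Docker Root Dir"
# Docker Root Dir: /var/lib/docker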
4. Docker Volumes
Data can be persisted using volumes, bind mounts, and tmpfs mounts.
Volumes have advantages over bind mounts: they are managed by Docker, independent of the host's directory structure, and easier to back up and migrate.
Declare a volume using -v field1:field2:field3. The fields must be in the correct order, and the meaning of each field is not immediately obvious:
- In the case of named volumes, the first field is the name of the volume, and is unique on a given host machine. For anonymous volumes, the first field is omitted.
- The second field is the path where the file or directory is mounted in the container.
- The third field is optional, and is a comma-separated list of options, such as ro. These options are discussed below.
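For example, the following (illustrative) command uses all three fields to mount the named volume my-vol read-only at /app; nginx:latest stands in for any image:
docker run -d -v my-vol:/app:ro nginx:latest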
4.1 Create and manage volumes
Unlike a bind mount, you can create and manage volumes outside the scope of any container.
Create a volume:
docker volume create my-vol
List volumes:
docker volume ls
DRIVER    VOLUME NAME
local     my-vol
Inspect a volume:
$ docker volume inspect my-vol
[
    {
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
        "Name": "my-vol",
        "Options": {},
        "Scope": "local"
    }
]
Remove a volume:
docker volume rm my-vol
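To remove all unused volumes in one go, docker volume prune is available (it prompts for confirmation):
docker volume prune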
4.2 Start a container with a volume
If you start a container with a volume that does not yet exist, Docker creates the volume for you. The following example mounts the volume myvol2 into /app/ in the container.
The -v and --mount examples below produce the same result. You can't run them both unless you remove the devtest container and the myvol2 volume after running the first one.
docker run -d \
  --name devtest \
  -v myvol2:/app \
  nginx:latest
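The equivalent --mount form, per the standard Docker syntax:
docker run -d \
  --name devtest \
  --mount source=myvol2,target=/app \
  nginx:latest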
Use docker inspect devtest to verify that the volume was created and mounted correctly. Look for the Mounts section:
"Mounts": [
{
"Type": "volume",
"Name": "myvol2",
"Source": "/var/lib/docker/volumes/myvol2/_data",
"Destination": "/app",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
Stop the container and remove the volume. Note volume removal is a separate step.
docker container stop devtest
docker container rm devtest
docker volume rm myvol2
4.3 Volume with docker-compose
4.3.1 Docker volume basic
There are four ways to mount a volume:
- Relative Path
- Absolute Path
- Docker Volume Default Path
- Docker Volume with Absolute Path
Here is an example showing all four:
version: '3'
services:
  sample:
    image: sample
    volumes:
      - ./relative-path-volume:/var/data-two
      - /home/ubuntu/absolute-path-volume:/var/data-one
      - docker-volume-default-path-volume:/var/data-three
      - docker-volume-absolute-path-volume:/var/data-four
volumes:
  docker-volume-default-path-volume: {}
  docker-volume-absolute-path-volume:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /home/path/of/your/folder
Relative Path: ./relative-path-volume:/var/data-two
Absolute Path: /home/ubuntu/absolute-path-volume:/var/data-one
Docker Volume Default Path: docker-volume-default-path-volume:/var/data-three
Docker Volume with Absolute Path: docker-volume-absolute-path-volume:/var/data-four
This pattern works on any host, since the volume's device property can be pointed at the appropriate directory path.
4.3.2 Declare volume with service
Mount host paths or named volumes, specified as sub-options to a service.
You can mount a host path as part of a definition for a single service, and there is no need to define it in the top level volumes key.
But, if you want to reuse a volume across multiple services, then define a named volume in the top-level volumes key. Use named volumes with services, swarms, and stack files.
This example shows a named volume (mydata) being used by the web service, and a bind mount defined for a single service (first path under db service volumes). The db service also uses a named volume called dbdata (second path under db service volumes), but defines it using the old string format for mounting a named volume. Named volumes must be listed under the top-level volumes key, as shown.
version: "3.8"
services:
web:
image: nginx:alpine
volumes: --> long syntax
- type: volume
source: mydata
target: /data
volume:
nocopy: true
- type: bind
source: ./static
target: /opt/app/static
db:
image: postgres:latest
volumes: --> short syntax source:target:mode
- "/var/run/postgres/postgres.sock:/var/run/postgres/postgres.sock" --> bind mount
- "dbdata:/var/lib/postgresql/data" --> named volume
volumes:
mydata:
dbdata:
4.3.3 Short syntax
The short syntax uses the generic [SOURCE:]TARGET[:MODE] format, where SOURCE can be either a host path or volume name. TARGET is the container path where the volume is mounted. Standard modes are ro for read-only and rw for read-write (default).
You can mount a relative path on the host, which expands relative to the directory of the Compose configuration file being used. Relative paths should always begin with . or ..
volumes:
  # Just specify a path and let the Engine create a volume
  - /var/lib/mysql
  # Specify an absolute path mapping
  - /opt/data:/var/lib/mysql
  # Path on the host, relative to the Compose file
  - ./cache:/tmp/cache
  # User-relative path
  - ~/configs:/etc/configs/:ro
  # Named volume
  - datavolume:/var/lib/mysql
5. docker history
Description
Show the history of an image
Usage
docker history [OPTIONS] IMAGE
Options
Option         Default  Description
--human, -H    true     Print sizes and dates in human-readable format
--no-trunc     false    Don't truncate output
--quiet, -q    false    Only show numeric IDs
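For example (the image name is illustrative):
docker history nginx:latest             # one row per layer
docker history --no-trunc nginx:latest  # full layer commands and IDs
docker history -q nginx:latest          # layer IDs only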
6. Docker Hub Upload
Save login info
docker login
# user: vsrccom, pwd: 1.2.
# user: thuydang, pwd: 1.2.
Credentials are saved under $HOME/.docker/config.json
6.1 Commit image
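A minimal sketch of committing a running container to a new image (container ID and image name illustrative; section 7.1 shows a worked session):
docker ps                                # find the running container's ID
docker commit <container_id> myuser/myimage:mytag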
6.2 Tag image with registry host
You need to tag your image correctly first with your registry host:
docker tag [OPTIONS] IMAGE[:TAG] [REGISTRYHOST/][USERNAME/]NAME[:TAG]
Then docker push using that same tag.
docker push NAME[:TAG]
Example:
docker images --digests
REPOSITORY               TAG     DIGEST  IMAGE ID      CREATED      SIZE
anthenao_img_web_final   latest  <none>  8c07e68c80af  12 days ago  706 MB
docker tag image_name_or_id docker.io/repo_name
docker push docker.io/repo_name
So, with this image (518a41981a6a), what is the actual tag syntax to push it to the me-private repo?
docker tag 518a41981a6a me-private.com/myPrivateImage && docker push me-private.com/myPrivateImage
6.3 Upload image
docker push yourname/newimage
docker push vsrccom/newimage
Docker User Guide
7. Build app image
(see s2i section)
7.1 Dev with dev container
Quick-start a container for developing the app.
Create an image with vim, etc.
Run as root with source folder on host mounted:
sudo docker run -it -u root -v $(pwd)/anthenao:/opt/app-root/src -p 8000:8080 anthenao_dev_img bash
Install packages, make changes
pip install -r requirements/development.txt
Run the server. Avoid binding to 127.0.0.1, which cannot be reached from outside the container:
python manage.py runserver 0.0.0.0:8080
Commit Changes
[dang@dai142 anthenao_web]$ sudo docker ps
CONTAINER ID  IMAGE             COMMAND                 CREATED        STATUS        PORTS     NAMES
549ee7dc07fd  anthenao_dev_img  "container-entrypoint"  2 minutes ago  Up 2 minutes  8080/tcp  dreamy_ptolemy
[dang@dai142 anthenao_web]$ sudo docker commit 549ee7dc07fd anthenao_dev_img
sha256:e66985cfd36b2bf7ca14f566b5a33708e58d5f290e0f469db0bf4ff78eacc797
8. Build database image
8.1 Dockerfile
FROM postgres:9.4
COPY setup.sh /docker-entrypoint-initdb.d/
This extends the base image by adding a setup script (run after Postgres starts up) that creates our database and a user:
#!/usr/bin/env bash
createuser -U postgres --createdb --createrole dbuser;
createdb -U dbuser anthenaodb;
Again, if we build and run this, we should be able to confirm with psql that our database is ready.
docker build -f dockerfile -t anthenao_db:latest .   # note the trailing dot (build context)
docker run -d anthenao_db:latest
Build with docker-compose:
docker-compose build db
8.2 Test
docker run -it --net=anthenao_bridge -v anthenao_db_data:/var/lib/postgresql/data anthenao_img_postgres:9.5
docker run -it --net=anthenao_bridge -v /path/on/host:/var/lib/postgresql/data anthenao_img_postgres:9.5
docker exec -ti -u root container_name bash
docker ps -a
docker exec -it mad_goldstine bash
psql -U dbuser -d anthenaodb
root@f08c71411a8d:/# gosu postgres psql
psql (9.5.5)
8.3 Run with docker-compose
docker-compose -f ../eco/backend/docker-compose.yml run --service-ports web bash
8.4 More Test
docker run -it --net=anthenao_bridge -v anthenao_db_data:/var/lib/postgresql/data anthenao_img_postgres:9.5
docker run -it --rm --name pg1 -e POSTGRES_PASSWORD=12345 -e POSTGRES_USER=bob postgres:9.4
docker run -it --link pg1:postgres --rm postgres:9.4 sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U bob'
$ docker run -it --rm --name pg1 -p 5432:5432 -e POSTGRES_PASSWORD=12345 -e POSTGRES_USER=bob postgres:9.4
$ psql -h dockerhost -U bob
9. Persisting Data
Create docker volume:
docker volume create --name=dbdata
Then run our db container, mounting the dbdata volume at /var/lib/postgresql/data.
docker run -d -v dbdata:/var/lib/postgresql/data anthenao_db:latest
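To convince yourself the data persists, remove the container and start a fresh one against the same volume (container name illustrative); the new container sees the same data directory:
docker container rm -f <old_container>
docker run -d -v dbdata:/var/lib/postgresql/data anthenao_db:latest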
10. Linking Containers
Firstly we can run our db container with the name “db” which Docker uses to identify containers when linking:
docker run -d --name db -v dbdata:/var/lib/postgresql/data 279   # 279 is the db image's ID prefix
Then run the api container and link to “db”:
docker run -it -p 8000:8000 -v $(pwd):/var/www/app --link db d8e bash
Now if we enter the API container and run ping db we'll see the responses from the IP address assigned to the db container.
Let's update our Django settings to point to “db” (remember the files are mounted so the changes are immediately visible):
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'films',
        'USER': 'dbuser',
        'PASSWORD': '',
        'HOST': 'db',
        'PORT': '5432',
    }
}
When we run python manage.py migrate we should see it update our postgres db.
And just to emphasise the point - destroy/rerun the containers and you should see there are no migrations to run because of the persisted dbdata volume.
11. Using Networking instead of Link
docker network create backend
docker network ls
docker run -d --name db --net=backend -v dbdata:/var/lib/postgresql/data 279
And similarly with the api container (note that we cannot use --link anymore):
docker run -it --net=backend -p 8000:8000 -v $(pwd):/var/www/app d8e bash
docker network inspect backend
shows us that our containers are now attached to the “backend” network:
[
    {
        "Name": "backend",
        "Id": "bed3b16fd4e1cd6ae4d58340cea426032052204d0c3d3666cb23b758ab92187c",
        "Scope": "local",
        "Driver": "bridge",
        "IPAM": {
            "Driver": "default",
            "Config": [
                {}
            ]
        },
        "Containers": {
            "29be0d63bb2bf069c2f51879947f075293421629bf1865024326f7368ca9ac39": {
                "EndpointID": "c168933c2ae4ba31c724d3b4194deb007d01ecd3d1116f97ca7ff2d9a547599f",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "b16b51597189c61deeb167ad71a21979b0e90d9c9a8435028234c2dbec94825b": {
                "EndpointID": "9979281e8a30e863cbbf0069bc3bd9bebd20603de08ca779b331cf83bba4009d",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {}
    }
]
With this done, ping db and the migrate scripts should all work as before.
12. Clean UP
docker rm -v $(docker ps -a -q -f status=exited)
docker rmi $(docker images -f "dangling=true" -q)
docker volume rm $(docker volume ls -qf dangling=true)
docker-compose -f docker-compose.yml rm
13. Exposing ports
Exposing incoming ports through the host is fiddly but doable.
This is done by mapping the container port to the host port (only using localhost interface) using -p:
docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage
docker-compose run needs the --service-ports flag to publish the service's ports; the stop/rm commands can be used to remove old instances:
docker-compose run --service-ports nodejsserver
You can tell Docker that the container listens on the specified network ports at runtime by using EXPOSE:
EXPOSE <CONTAINERPORT>
Note that EXPOSE does not publish the port itself; only -p will do that. To expose the container's port on your localhost's port:
iptables -t nat -A DOCKER -p tcp --dport <LOCALHOSTPORT> -j DNAT --to-destination <CONTAINERIP>:<PORT>
If you're running Docker in Virtualbox, you then need to forward the port there as well, using forwarded_port. Define a range of ports in your Vagrantfile like this so you can dynamically map them:
<code>
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  ...
  (49000..49900).each do |port|
    config.vm.network :forwarded_port, :host => port, :guest => port
  end
  ...
end
</code>
If you forget what you mapped the port to on the host, use docker port to show it:
docker port CONTAINER $CONTAINERPORT
Docker-compose
Multiple isolated environments on a single host
Compose uses a project name to isolate environments from each other. You can make use of this project name in several different contexts:
- on a dev host, to create multiple copies of a single environment, such as when you want to run a stable copy for each feature branch of a project
- on a CI server, to keep builds from interfering with each other, you can set the project name to a unique build number
- on a shared host or dev host, to prevent different projects, which may use the same service names, from interfering with each other.
The default project name is the basename of the project directory. You can set a custom project name by using the -p command line option or the COMPOSE_PROJECT_NAME environment variable.
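For example (project names illustrative):
docker-compose -p feature_x up -d
COMPOSE_PROJECT_NAME=build_42 docker-compose up -d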
Preserve volume data when containers are created
Compose preserves all volumes used by your services. When docker-compose up runs, if it finds any containers from previous runs, it copies the volumes from the old container to the new container. This process ensures that any data you’ve created in volumes isn’t lost.
If you use docker-compose on a Windows machine, see Environment variables and adjust the necessary environment variables for your specific needs.
Only recreate containers that have changed
Compose caches the configuration used to create a container. When you restart a service that has not changed, Compose re-uses the existing containers. Re-using containers means that you can make changes to your environment very quickly.
Variables and moving a composition between environments
Compose supports variables in the Compose file. You can use these variables to customize your composition for different environments, or different users. See Variable substitution for more details.
You can extend a Compose file using the extends field or by creating multiple Compose files. See extends for more details.
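A common pattern is layering a base file with an environment-specific override; later files extend or override earlier ones (file names illustrative):
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d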
Recreate a certain container
If you want to recreate and run a specific service from your docker-compose file, you can do it like this:
docker-compose up -d --force-recreate --no-deps --build service_name
Recreate volume
Docker will reuse volumes. To clean them:
docker-compose down -v
before the up command.
Recreate anonymous volumes instead of retrieving data from the previous containers: -V, --renew-anon-volumes.
docker-compose up --renew-anon-volumes
Remove named volumes
docker volume rm VOLUME_NAME
0.1 Compose file
1. Env file
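Compose reads variables from a .env file next to the Compose file and substitutes them into the configuration. A minimal sketch (values illustrative):
$ cat .env
TAG=v1.5
$ grep image docker-compose.yml
    image: 'webapp:${TAG}'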
You can verify this with the config command, which prints your resolved application config to the terminal:
docker-compose config
version: '3'
services:
  web:
    image: 'webapp:v1.5'
1.1 Compose networking
Our database container is no longer available at db. Instead, with the new networking api, its alias is set to <network>_<app>_<index>, i.e. backend_db_1. So we need to set our host to this in the Django settings:
'HOST': 'backend_db_1',
Then run:
docker-compose --x-networking up
And everything should start up on a new network called “backend”.
Note: --x-networking is the experimental flag which enables the new networking api in docker-compose.
We can simplify this further for development by adding a bash script called runInDevMode with the following:
#!/usr/bin/env bash
# create the dbdata volume
docker volume create --name=dbdata
# start the applications on the backend network
cd ./eco/backend/ && docker-compose --x-networking up
Producing Public Docker Image
Summary of steps:
- Create local image
- Download and Install additional software to base image: Dockerfile.s2i.base
- Create s2i workspace for: Dockerfile, src folder in the image (/opt/app-root).
- Modify Dockerfile, install packages, update s2i files, etc.
- Build local image: docker build s2i_base_img
- Result: s2i_base_img
- Run s2i to push source code to the base image and produce local image.
- s2i build source_folder s2i_base_img my_image
- (Optional) Rerun S2I to add source code to the image with scripts: s2i build …
- (Optional) Other methods: commit changes of the running image to the local image.
- Development Dockerfile / Compose.yml
- Upload image
  - Tag local image: docker tag …
    docker tag image_id dockerhub_id/repo_image_name:version
  - Upload: docker push …
    docker push dockerhub_id/repo_image_name
- Production Dockerfile / Compose.yml
1. Images workflow
1.1 S2I Project
Create 1st image:
  Stock image --docker build Dockerfile.s2i--> image:s2i --s2i build--> image:latest
Development:
  image:latest --docker build Dockerfile.dev--> image:dev --> run + edit code
  docker build -t vfoss_img_web:dev -f Dockerfile.development .
  docker run -it -u root -v $(pwd)/vfossorg:/opt/app-root/src -v $(pwd)/logs:/opt/app-root/logs -p 8000:8080 vfoss_img_web:dev bash
Produce release image:
  image:s2i --s2i build--> image:release
Commit public latest:
1.2 Docker Deployment
Production local --compose local image--> run
Production repo --compose repo image--> run
2. Create local image
2.1 Create image for s2i
docker build -t anthenao_img_web:s2i -f Dockerfile.s2i.base .   # note the trailing dot
=== Dockerfile.s2i.base ===
<code> #file: Dockerfile.s2i.base # This image provides a Python 2.7 environment you can use to run your Python # applications. ### 2 Steps # 1. Make (pre)build image: docker build -t anthenaoimgwebbuild -f Dockerfile.s2i.base . | or make build (Makefile) # 2. build s2i command # s2i build –loglevel=4 anthenaosrc anthenaoimgwebbuild anthenaoimgwebfinal # s2i build –loglevel=4 anthenao anthenaoimgwebbuild anthenaoimgwebfinal # change src file and run more s2i build to create final image. # 3. Commit anthenaoimgweb_final # 4. Use uploaded image for production with production Dockerfile #FROM anthenaowebbase FROM centos/s2i-base-centos7 #FROM centos/python-27-centos7 MAINTAINER SoftwareCollections.org sclorg@redhat.com EXPOSE 8080 ENV PYTHON_VERSION=2.7 \ PATH=$HOME/.local/bin/:$PATH LABEL io.k8s.description=“Platform for building and running Python 2.7 applications” \ io.k8s.display-name=“Python 2.7” \ io.openshift.expose-services=“8080:http” \ io.openshift.tags=“builder,python,python27,rh-python27” USER root RUN yum install -y centos-release-scl && \ INSTALLPKGS=“libjpeg-turbo libjpeg-turbo-devel python27 python27-python-devel python27-python-setuptools python27-python-pip nsswrapper httpd httpd-devel atlas-devel gcc-gfortran gettext postgresql-libs nmap-ncat ” && \ yum install -y –setopt=tsflags=nodocs –enablerepo=centosplus $INSTALLPKGS && \ rpm -V $INSTALLPKGS && \ # Some additional packages #pip install virtualenvwapper && \ #echo “source /usr/local/bin/virtualenvwrapper.sh” » ~/.bashrc && \ # Remove centos-logos (httpd dependency, ~20MB of graphics) to keep image # size smaller. #rpm -e –nodeps centos-logos && \ yum clean all -y # Each language image can have 'contrib' a directory with extra files needed to # run and build the applications. COPY ./contrib/ /opt/app-root # Copy the S2I scripts from the specific language image to $STISCRIPTSPATH. #COPY ./s2i/bin/ $STISCRIPTSPATH #USER root #COPY ./s2i/bin/* /usr/libexec/s2i/ LABEL io.openshift.s2i.scripts-url=image:/usr/libexec/s2i COPY ./anthenao/.s2i/bin/ /usr/libexec/s2i #RUN rm -rf /opt/app-root/* # Copy the S2I scripts from the specific language image to other location and update. #USER 1001 #RUN mkdir -p /opt/app-root/s2i/bin #COPY ./s2i/bin /opt/app-root/s2i/ #LABEL io.openshift.s2i.scripts-url=image:/opt/app-root/s2i/bin # App specifics #RUN mkdir -p /opt/app-root/logs # In order to drop the root user, we have to make some directories world # writable as OpenShift default security model is to run the container under # random UID. RUN chown -R 1001:0 /opt/app-root && chmod -R ug+rwx /opt/app-root USER 1001 # Set the default CMD to print the usage of the language image. CMD /usr/libexec/s21/usage </code>2.2 Add Source Code with S2I
s2i build --loglevel=4 anthenao anthenao_img_web:s2i anthenao_img_web:release
Change src files and run more s2i builds to create the final image. See the next big section.
2.3 Update Source Code using the previous release
s2i build --loglevel=4 anthenao anthenao_img_web:latest anthenao_img_web:release
2.4 Developing App
How to develop the app with Docker?
3. Upload Image
* Commit image from running container: https://docs.docker.com/engine/reference/commandline/commit/
* Changing a container and committing an image: it is better to make changes with a Dockerfile.
* The --change option will apply Dockerfile instructions to the image that is created. Supported Dockerfile instructions: CMD|ENTRYPOINT|ENV|EXPOSE|LABEL|ONBUILD|USER|VOLUME|WORKDIR
* docker commit --change='CMD ["apachectl", "-DFOREGROUND"]' -c "EXPOSE 80" c3f279d17e0a svendowideit/testimage:version4
  f5283438590d
* https://docs.docker.com/engine/tutorials/dockerrepos/
* tag, push, pull: https://docs.docker.com/engine/getstarted/step_six/

Uploading image to docker.io/vsrccom/anthenao_img_web. Local image:

docker images
REPOSITORY    TAG     IMAGE ID      CREATED         SIZE
docker-whale  latest  7d9495d03763  38 minutes ago  273.7 MB
Tag image: https://docs.docker.com/engine/getstarted/tutimg/tagger.png
Tagging is not needed for a pulled image; it can be used to update the version.
docker tag 7d9495d03763 maryatdocker/docker-whale:latest
docker pull vsrccom/anthenao_img_web

After tagging:

docker images
REPOSITORY                 TAG     IMAGE ID      CREATED        SIZE
maryatdocker/docker-whale  latest  7d9495d03763  5 minutes ago  273.7 MB

Login:

docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username:
Password:
Login Succeeded
docker push maryatdocker/docker-whale
The push refers to a repository [maryatdocker/docker-whale] (len: 1)

===== Production Docker-compose.yml =====
=== docker-compose.yml ===
<code yml>
version: "2"
services:
  web:
    container_name: web
    #image: anthenao_img_web:production
    image: vsrccom/anthenao_img_web   # <---- public image
    # Disable build for production
    #build:
    #  context: ../../anthenao_web
    #  dockerfile: Dockerfile.production
    #  #args:
    #  #  buildno: 1
    networks:
      - anthenao_bridge
    ports:
      #- "host-->container"
      - "8000:8080"
    #volumes:
    #  - ./api:/var/www/app
    env_file: ../../anthenao_web/anthenao/.s2i/environment
    #tty: true
    depends_on:
      - db
    #command: /var/www/app/start.sh
  db:
    container_name: db
    image: anthenao_img_postgres:9.5
    build: ../../anthenao_db
    volumes:
      #- /path/on/host:/path/in/container
      #- volume_name:/path/in/container
      - "anthenao_db_data:/var/lib/postgresql/data"
    networks:
      - anthenao_bridge
    ports:
      - "5432:5432"
    privileged: true
    #tty: true
    # DB_VARS in setup.sh entry point
    #env_file: .env
    #environment:
    #  - POSTGRES_USER=dbuser
    #  - POSTGRES_PASSWORD=
    #  - POSTGRES_DB=anthenaodb
volumes:
  anthenao_db_data:   # must declare volumes
networks:
  anthenao_bridge:
    #driver: overlay
## usage: docker-compose up,
## before: docker-compose build (svc_name)
# Compose build only runs the Dockerfile. An image built with s2i cannot be built with compose.
# Compose can be used to run (up) the container, however. Note: set image-name to the s2i-built image.
</code>

====== S2I ======
Follow these:
* https://zwischenzugs.wordpress.com/2015/07/04/redhats-docker-build-method-s2i/
* https://blog.openshift.com/create-s2i-builder-image/
* https://github.com/openshift/source-to-image/blob/master/docs/builder_image.md
* https://blog.openshift.com/override-s2i-builder-scripts/

===== Install s2i =====
wget https://github.com/openshift/source-to-image/releases
tar xvzf
cp s2i /usr/local/bin
===== Create build dir =====
s2i create BASE_IMG BUILD_DIR
s2i create centos/python-27-centos7 my-image
===== App dir =====
Place the source code of the app in the generated folder.
<code>
anthenao/                    <-- generated by cookiecutter-django-cms
├── anthenao
│   ├── __init__.py
│   ├── settings
│   │   ├── base.py
│   │   ├── dev.py
│   │   ├── __init__.py
│   │   └── production.py
│   ├── urls.py
│   └── wsgi.py
├── core
│   ├── context_processor.py
│   ├── fixtures
│   │   └── initial_data.json
│   ├── __init__.py
│   ├── models.py
│   ├── tests.py
│   └── views.py
├── manage.py                <-- used by the s2i assemble script
├── requirements
│   ├── base.txt
│   ├── development.txt
│   └── production.txt
├── requirements.txt         <-- used by the s2i assemble script
├── .s2i                     <-- s2i scripts
│   ├── bin
│   │   ├── assemble         <-- called when building the image
│   │   ├── run              <-- called when the container starts
│   │   ├── save-artifacts
│   │   └── usage
│   └── environment          <-- ENV vars as key=value
├── static
│   ├── css
│   │   └── main.css
│   └── script.js
└── templates
    ├── base.html
    ├── menu.html
    └── single_page.html
</code>

===== Dockerfile =====
<code>
# This image provides a Python 2.7 environment you can use to run your Python
# applications.
FROM centos/s2i-base-centos7
#FROM centos/python-27-centos7

MAINTAINER SoftwareCollections.org sclorg@redhat.com

EXPOSE 8080

ENV PYTHON_VERSION=2.7 \
    PATH=$HOME/.local/bin/:$PATH

LABEL io.k8s.description="Platform for building and running Python 2.7 applications" \
      io.k8s.display-name="Python 2.7" \
      io.openshift.expose-services="8080:http" \
      io.openshift.tags="builder,python,python27,rh-python27"

RUN yum install -y centos-release-scl && \
    INSTALL_PKGS="libjpeg-turbo libjpeg-turbo-devel python27 python27-python-devel \
      python27-python-setuptools python27-python-pip nss_wrapper \
      httpd httpd-devel atlas-devel gcc-gfortran postgresql-client libpq-devel netcat" && \
    yum install -y --setopt=tsflags=nodocs --enablerepo=centosplus $INSTALL_PKGS && \
    rpm -V $INSTALL_PKGS && \
    # Some additional packages
    #pip install virtualenvwrapper && \
    #echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.bashrc && \
    # Remove centos-logos (httpd dependency, ~20MB of graphics) to keep image
    # size smaller.
    rpm -e --nodeps centos-logos && \
    yum clean all -y

# Copy the S2I scripts from the specific language image to $STI_SCRIPTS_PATH.
#COPY ./s2i/bin/ $STI_SCRIPTS_PATH
#USER root
#COPY ./s2i/bin/* /usr/libexec/s2i/
#LABEL io.openshift.s2i.scripts-url=image:/usr/libexec/s2i
COPY ./s2i/bin/ /usr/libexec/s2i
RUN rm -rf /opt/app-root/*

# Copy the S2I scripts from the specific language image to another location and update.
#USER 1001
#RUN mkdir -p /opt/app-root/s2i/bin
#COPY ./s2i/bin /opt/app-root/s2i/
#LABEL io.openshift.s2i.scripts-url=image:/opt/app-root/s2i/bin

# Each language image can have a 'contrib' directory with extra files needed to
# run and build the applications.
COPY ./contrib/ /opt/app-root

# In order to drop the root user, we have to make some directories world
# writable as OpenShift default security model is to run the container under
# random UID.
RUN chown -R 1001:0 /opt/app-root && chmod -R ug+rwx /opt/app-root

USER 1001

# Set the default CMD to print the usage of the language image.
CMD $STI_SCRIPTS_PATH/usage
</code>
ERROR: .s2i scripts are not copied to the correct location /usr/local/s2i. FIX: place the scripts in app_dir/.s2i/bin.

===== S2I assemble =====
===== S2I run =====
===== S2I environment =====
To set these environment variables, you can place them as key=value pairs into a .s2i/environment file inside your source code repository.

APP_FILE
  Used to run the application from a Python script. This should be a path to a Python file (defaults to app.py) that will be passed to the Python interpreter to start the application.

APP_MODULE
  Used to run the application with Gunicorn. This variable specifies a WSGI callable with the pattern MODULE_NAME:VARIABLE_NAME, where MODULE_NAME is the full dotted path of a module, and VARIABLE_NAME refers to a WSGI callable inside the specified module. Gunicorn will look for a WSGI callable named application if not specified.
If APP_MODULE is not provided, the run script will look for a wsgi.py file in your project and use it if it exists. If using setup.py for installing the application, the MODULE_NAME part can be read from there. For an example, see setup-test-app.

APP_HOME
  This variable can be used to specify a sub-directory in which the application to be run is contained. The directory pointed to by this variable needs to contain wsgi.py (for Gunicorn) or manage.py (for Django). If APP_HOME is not provided, the assemble and run scripts will use the application's root directory.

APP_CONFIG
  Path to a valid Python file with a Gunicorn configuration.

DISABLE_COLLECTSTATIC
  Set this variable to a non-empty value to inhibit the execution of 'manage.py collectstatic' during the build. This only affects Django projects.

DISABLE_MIGRATE
  Set this variable to a non-empty value to inhibit the execution of 'manage.py migrate' when the produced image is run. This only affects Django projects.

PIP_INDEX_URL
  Set this variable to use a custom index URL or mirror to download required packages during the build process. This only affects packages listed in requirements.txt.

WEB_CONCURRENCY
  Set this to change the default setting for the number of workers. By default, this is set to the number of available cores times 4.

Source repository layout
You do not need to change anything in your existing Python project's repository. However, if these files exist they will affect the behavior of the build process:

requirements.txt
  List of dependencies to be installed with pip.

setup.py
  Configures various aspects of the project, including installation of dependencies. For most projects, it is sufficient to simply use requirements.txt.

Run strategies
The Docker image produced by s2i-python executes your project in one of the following ways, in precedence order:

Gunicorn
  The Gunicorn WSGI HTTP server is used to serve your application if it is installed. It can be installed by listing it either in the requirements.txt file or in the install_requires section of the setup.py file. If a file named wsgi.py is present in your repository, it will be used as the entry point to your application. This can be overridden with the environment variable APP_MODULE. This file is present in Django projects by default. If you have both Django and Gunicorn in your requirements, your Django project will automatically be served using Gunicorn.

Django development server
  If you have Django in your requirements but don't have Gunicorn, then your application will be served using Django's development web server. However, this is not recommended for production environments.

Python script
  This is the most general way of executing your application. It is used when you specify a path to a Python script via the APP_FILE environment variable, defaulting to a file named app.py if it exists. The script is passed to a regular Python interpreter to launch your application.

Hot deploy
If you are using Django, hot deploy will work out of the box. To enable hot deploy while using Gunicorn, make sure you have a Gunicorn configuration file inside your repository with the reload option set to true, and specify your config via the APP_CONFIG environment variable.
To change your source code in a running container, use Docker's exec command:
docker exec -it <CONTAINER_ID> /bin/bash
After you enter the running container, your current directory is set to /opt/app-root/src, where the source code is located.
APP_HOME=folder_containing_manage.py
===== Building =====
<code>
s2i build --loglevel=3 anthenao centos/python-27-centos7 anthenao_web
docker run -p 8080:8080 sample-app
#make test
docker images
REPOSITORY                          TAG     IMAGE ID      CREATED         SIZE
anthenao_web                        latest  7bafa62cc3e3  21 minutes ago  537.7 MB
docker.io/centos/python-27-centos7  latest  3cd4e07825e3  4 days ago      537.6 MB
docker.io/centos/s2i-base-centos7   latest  d70e9ec1b5cd  5 days ago      383.1 MB
docker.io/hello-world               latest  c54a2cc56cbb  5 months ago    1.848 kB
[fedora@default-host anthenao_web]$ docker ps -a
CONTAINER ID  IMAGE         COMMAND                 CREATED         STATUS                     PORTS  NAMES
abddebaf9d9f  anthenao_web  "container-entrypoint"  21 minutes ago  Exited (0) 19 minutes ago         furious_davinci
d225a390693f  hello-world   "/hello"                2 days ago      Exited (0) 2 days ago             loving_fermat
docker rm container_id
docker rmi image_id
</code>

===== Troubleshooting =====
==== pip cannot install requirements.txt ====
Looks like DNS is not working. Restart Docker:
systemctl restart docker.service
==== docker rm ====
docker rm 3a15809430ca_db
Error response from daemon: Driver overlay failed to remove root filesystem 3a15809430ca0a124e7aeaa41157cd72d0311f6e3503010605fd6e2061d57863: remove /var/lib/docker/overlay/880ba348df25857c620932b5a59a181eb6cc315fbfdde98e928d45504f8fecb1/merged: device or resource busy
Solution:
docker rm -f xxxx   # removed, but the error may still appear
umount /var/lib/docker/devicemapper/mnt/656cfd09aee399c8ae8c8d3e735fe48d70be6672773616e15579c8de18e2a3b3