Dataset columns: Response (string, 8–2k chars), Instruction (string, 18–2k chars), Prompt (string, 14–160 chars).
Fixed it with these simpler steps ;)

1. Reset Shared Drives in Docker for Windows (re-entering your credentials if needed by using the "reset credentials" link).
2. Clean your VS solution and rebuild.
3. Debug.
I was running an ASP.NET Core project in a docker container perfectly, but then I created another project in the same solution, which was referenced by the first one. When building, VS 2017 didn't complain. When debugging, VS says: "Operation aborted (Exception from HRESULT: 0x80004004 (E_ABORT))". Then I tried creating a new solution with a new project (only one this time). The same thing happened: build successful, debugging impossible. Restarting the computer didn't work, and neither did running VS with admin privileges. How can I fix that? I am ready to scrap the whole project and start all over if needed. I appreciate any response. Thanks in advance.
Visual Studio 2017 HRESULT: 0x80004004
Get an account on Docker Hub: https://hub.docker.com/account/signup/

Once signed up (only do that once), you log in from the host that has the image you want to push:

    docker login

(log in with your username, password, and email address). Then you push your image up there. You probably will need to tag it first. Say you created a new account called mynewacc; first, you tag your image:

    docker tag ubuntu-dev-update-15 mynewacc/ubuntu-dev-update-15

then push the image up to your Docker Hub:

    docker push mynewacc/ubuntu-dev-update-15

Now anybody else with Docker can pull your image down:

    docker pull mynewacc/ubuntu-dev-update-15

then to run the image:

    docker run -it mynewacc/ubuntu-dev-update-15 /bin/bash

You can skip the pull step; if the image doesn't exist it will be pulled anyway. The pull guarantees you get the freshest one.
I have a 5GB docker image named "ubuntu-dev-update-15", which I developed on my local Ubuntu 14 dev machine. In that image I have everything I need to do my development work. Now I need to be able to send this image to a different linux host. What is the procedure for doing that?
how to package a docker image in a single file
Great question. I did some research and, as far as I know, it's not possible with the current Docker/Moby implementation. It's also a problem for other properties as well, as you can see here (the issue is from 2014!): https://github.com/moby/moby/issues/3465

I know it's really annoying, but if you really want to remove that you can try following this: https://github.com/moby/moby/issues/3465#issuecomment-383416201

The person automated this process with a Python script that seems to let you do what you want: https://github.com/gdraheim/docker-copyedit

It appears to have the Remove Label operation (https://github.com/gdraheim/docker-copyedit/blob/92091ed4d7a91fda2de39eb3ded8dd280fe61a35/docker-copyedit.py#L304), which is what you want. I don't know if it works (I haven't had time to test it), but I think it's worth trying.
I've a Dockerfile starting with the official nginx image:

    FROM nginx

And they set the maintainer label:

    LABEL maintainer="NGINX Docker Maintainers <[email protected]>"

So, now my image appears to also be maintained by them.

    $ docker image inspect example-nginx
    ...
    "Labels": {
        "maintainer": "NGINX Docker Maintainers <[email protected]>"
    },

The documentation mentions how to overwrite the label. But, so far, the best I can do is set it to an empty value.

    LABEL maintainer=

    $ docker image inspect example-nginx
    ...
    "Labels": {
        "maintainer": ""
    },

How do I completely remove or unset a label set by a parent image?
How do I unset a Docker image label?
Is this what you're looking for?

    docker image inspect --format "{{.ID}} {{.RepoTags}} {{.Architecture}}" $(docker image ls -q)

Output:

    sha256:fb495265b81ffaa0d3bf8887231b954a1139b2e3d611c3ac3ddeaff72313909c [postgres:10.11-alpine] amd64

Explanation:

$(docker image ls -q) → passes all image IDs as parameters to the inspect command
docker image inspect → prints detailed info about the image
--format "{{.ID}} {{.RepoTags}} {{.Architecture}}" → prints only the necessary data instead of the full JSON

Also it is possible to add a pipe with grep, like {inspect command} | egrep 'amd64$', to print only the amd64 architecture, for example.
I own a Mac M1 and I run Docker on it. On OSX, Docker can run native ARM images but also emulate x86/amd64 to run images that were not built for ARM. My question is simple: from the command line, I am trying to find an extension of the command 'docker image ls' which displays the image platform.

    $ docker image ls
    REPOSITORY   TAG   PLATFORM   IMAGE ID   CREATED   SIZE
    ...          ...   arm64      ...        ...       ...
    ...          ...   x86        ...        ...       ...

I already saw this answer: How to filter Docker images by platform? but it does NOT answer the question. OS and PLATFORM are two different things. Thank you
List Docker images for ARM from the CLI
I found this old issue today using Docker Compose. The Python logging module checks whether the output is a terminal, so you need to add tty: true to the service. Example:

    version: '2'
    services:
      django:
        tty: true
        command: python -u manage.py runserver 0.0.0.0:8080
        ports:
          - "8080:8080"
I'm writing a Dockerfile which needs to run multiple commands as part of theCMDinstruction and I thought the right way to do this would be to run a shell script with the main daemon executed viaexec. Unfortunately, as part of that process some of my output (stdout? stderr? I don't know, and I don't know how to find out) gets lost.Here's the shell script:#!/bin/sh python manage.py migrate exec python manage.py runserver 0.0.0.0:8000The idea being that themigratecommand is just run once and its output shown, and then therunservercommand should take over and the container runs until that process exits.The actual problem is that the output ofmigrateis displayed correctly, but the immediate output ofrunserveris not shown. Strangely, later request logging ofrunserveris shown just fine.To clarify, here's the output I expected:[...] No migrations to apply. [...] Starting development server at http://0.0.0.0:8000/ Quit the server with CONTROL-C. [21/Jan/2015 16:27:06] "GET / HTTP/1.1" 200 15829Here's what I'm getting withfig up:[...] No migrations to apply. [...] [21/Jan/2015 16:27:06] "GET / HTTP/1.1" 200 15829I'm not even sure who's fault this is. Does therunservercommand change its output depending on how it is run? Is it a problem withexec? Is it docker/fig?As one additional data point, I noticed that I do get all the output when running the container withfig run web, but not when I dofig up, but I don't understand how that's different or relevant.Note: sorry for the tag spam, I'll reduce the tags once I know what actually causes this effect.
When running a Django dev server with docker/fig, why is some of the log output hidden?
apt-get generally needs to run as root, but once you've run a USER command, commands don't run as root any more.

You'll frequently run commands like this at the start of the Dockerfile: you want to take advantage of Docker layer caching if you can, and you'll usually be installing dependencies the rest of the Dockerfile needs. Also for layer-caching reasons, it's important to run apt-get update and other installation steps in a single step. So your Dockerfile would typically look like

    FROM ros:kinetic-robot-xenial
    # Still root
    RUN apt-get update \
     && apt-get install ...
    # Copy in application (still as root, won't be writable by other users)
    COPY ...
    CMD ["..."]
    # Now as the last step create a user and default to running as it
    RUN adduser ros
    USER ros

If you need to, you can explicitly USER root to switch back to root for subsequent commands, but it's usually easier to read and maintain Dockerfiles with less user switching.

Also note that neither sudo nor user passwords are really useful in Docker. It's hard to run sudo in a script just in general, and a lot of Docker things happen in scripts. Containers also almost never run things like getty or sshd that could potentially accept user passwords, and passwords are trivial to read back from docker history, so there's no point in setting one. Conversely, if you're in a position to get a shell in a container, you can always pass -u root to the docker run or docker exec command to get a root shell.
I am learning to use Docker with ROS, and I am surprised by this error message:

    FROM ros:kinetic-robot-xenial

    # create non-root user
    ENV USERNAME ros
    RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
    RUN bash -c 'echo $USERNAME:ros | chpasswd'
    ENV HOME /home/$USERNAME
    USER $USERNAME
    RUN apt-get update

Gives this error message:

    Step 7/7 : RUN apt-get update
     ---> Running in 95c40d1faadc
    Reading package lists...
    E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)
    The command '/bin/sh -c apt-get update' returned a non-zero code: 100
Why does simple Dockerfile give "permission denied"?
Using your Dockerfile with my project, I added a line before the last one as follows:

    RUN poetry config installer.max-workers 10
    RUN poetry install --no-interaction --no-ansi -vvv

It worked for me!
I cannot build my docker image. It throws a "Connection pool is full" error when installing dependencies via Poetry. This does not happen on my host machine. How can I solve this? Do I need to increase the pool size? If yes, how?

My Dockerfile:

    FROM python:3.10-alpine AS python
    ENV PYTHONUNBUFFERED=true
    WORKDIR /app

    FROM python as poetry
    ENV POETRY_HOME=/opt/poetry
    ENV POETRY_VIRTUALENVS_IN_PROJECT=true
    ENV PATH="$POETRY_HOME/bin:$PATH"
    RUN python -c 'from urllib.request import urlopen; print(urlopen("https://install.python-poetry.org").read().decode())' | python -
    COPY pyproject.toml poetry.lock ./
    RUN poetry install --no-interaction --no-ansi -vvv
Poetry install throws "Connection pool is full, discarding connection: pypi.org. Connection pool size: 10" error when building Docker image
You need to advertise your Kafka broker as kafka, which is the effective hostname for all linking containers (i.e. the hostname that the client needs to connect to from the Kafka protocol perspective, so kafka:9092 is correct, not 0.0.0.0):

    kafka:
      ...
      environment:
        KAFKA_ADVERTISED_HOST_NAME: kafka
I'm trying to use microservicesSpring BootwithKafka, but mySpring Bootcontainers can not connect to theKafkacontainer.docker-compose.yml:version: '3' services: zookeeper: image: wurstmeister/zookeeper container_name: zookeeper restart: always ports: - 2181:2181 kafka: image: wurstmeister/kafka container_name: kafka restart: always ports: - 9092:9092 depends_on: - zookeeper links: - zookeeper:zookeeper environment: KAFKA_ADVERTISED_HOST_NAME: localhost KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 consumer: image: consumer container_name: consumer depends_on: - kafka restart: always ports: - 8084:8080 depends_on: - kafka links: - kafka:kafka producer: image: producer container_name: producer depends_on: - kafka restart: always ports: - 8085:8080 depends_on: - kafka links: - kafka:kafkaapplication.propertiesin Consumer:spring.kafka.consumer.bootstrap-servers=kafka:9092 spring.kafka.consumer.group-id=WorkUnitApp spring.kafka.consumer.topic=kafka_topicapplication.propertiesin Producer:spring.kafka.producer.bootstrap-servers=kafka:9092But if I run theKafkain a container and theSpring Bootmicroservices locally it works.application.propertiesin Consumer:spring.kafka.consumer.bootstrap-servers=0.0.0.0:9092 spring.kafka.consumer.group-id=WorkUnitApp spring.kafka.consumer.topic=kafka_topicapplication.propertiesin Producer:spring.kafka.producer.bootstrap-servers=0.0.0.0:9092What's the problem, why does thelinksfrom thedockernot work ?p.s.0.0.0.0becausemac osEditedI added indocker-compose.ymlenvironments tokafkabut it still does not work either- KAFKA_ADVERTISED_PORT=9092
Spring Boot containers can not connect to the Kafka container
"But one service can not interact with another service since they are not on the same machine and tcp://localhost:61001 will obviously not work."

Actually, they can. You are right that tcp://localhost:61001 will not work, because using localhost within a container would be referring to the container itself, similar to how localhost works on any system by default. This means that your services cannot share the same host. If you want them to, you can use one container for both services, although this really isn't the best design since it defeats one of the main purposes of Docker Compose.

The ideal way to do it is with docker-compose links; the guide you referenced shows how to define them, but to actually use them you need to use the linked container's name in URLs, as if the linked container's name had an IP mapping defined in the original container's /etc/hosts (not that it actually does, but just so you get the idea). If you want to change it to be something different from the name of the linked container, you can use a link alias, which is explained in the same guide you referenced. For example, with a docker-compose.yml file like this:

    a:
      expose:
        - "9999"
    b:
      links:
        - a

With a listening on 0.0.0.0:9999, b can interact with a by making requests from within b to tcp://a:9999. It would also be possible to shell into b and run ping a, which would send ping requests to the a container from the b container.

So in conclusion, try replacing localhost in the request URL with the literal name of the linked container (or the link alias, if the link is defined with an alias). That means that tcp://<linked-container-name>:61001 should work instead of tcp://localhost:61001. Just make sure you define the link in docker-compose.yml. Hope this helps.
We're dockerizing our micro services app, and I ran into some discovery issues.The app is configured as follows:When the a service is started in 'non-local' mode, it uses Consul as its Discovery registry. When a service is started in 'local' mode, it automatically binds an address per service (For example, tcp://localhost:61001, tcp://localhost:61002 and so on. Hard coded addresses)After dockerizing the app (for local mode only, for now) each service is a container (Docker images orchestrated with docker-compose. And with docker-machine, if that matters) But one service can not interact with another service since they are not on the same machine and tcp://localhost:61001 will obviously not work.Using docker-compose withlinksand specifying localhost as an alias (service:localhost) didn't work. Is there a way for 2 containers to "share" the same localhost?If not, what is the best way to approach this? I thought about using specific hostname per service, and then specify the hostname in the links section of the docker-compose. (But I doubt that this is the elegant solution) Or maybe use a dockerized version of Consul and integrate with it?This post:How to share localhost between two different Docker containers?provided some insights about why localhost shouldn't be messed with - but I'm still quite puzzled on what's the correct approach here.Thanks!
Can (or should) 2 docker containers interact with each other via localhost?
Volumes are treated as mounts in Docker, which means the host directory will always be mounted over the container's directory. In other words, what you're trying to do isn't currently possible with Docker volumes.

See this GitHub issue for a discussion on this subject: https://github.com/docker/docker/issues/4361

One possible work-around would be to have a docker volume to an empty directory in your container, and then in your Docker RUN command (or start-up script), copy the static contents into that empty directory that is mounted as a volume.
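A minimal sketch of that work-around, assuming the /var/django/static path from the question; the /static-export directory, the entrypoint script and the image/host paths are hypothetical names for illustration:

    #!/bin/sh
    # entrypoint.sh (hypothetical): copy the collected static files into the
    # mounted, initially empty volume directory, then hand off to the real command
    cp -r /var/django/static/. /static-export/
    exec "$@"

    # on the host: mount the export directory somewhere nginx can read
    docker run -v /srv/nginx/static:/static-export my-django-image

This keeps the files baked into the image intact while still exposing a copy of them to the host at container start.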
I have a docker container that holds a django app. The static files are produced and copied to a static folder.

Container folder hierarchy:

    - var
      - django
        - app
        - static

Before I build the docker image, I run ./manage.py collectstatic so the static files are in the /var/django/static folder. To expose the app and serve the static files, I have nginx on the host. The problem is that if I do a volume between the static folder and a designated folder on the host, when I run the docker container, the /var/django/static folder in the container gets deleted (well, not deleted but mounted). Is there any way to overcome this? As in, set the volume but tell docker to take the current files as well?
expose files from docker container to host
You have to install libltdl-dev in order to get everything working correctly. Create a Dockerfile that looks like this:

    FROM jenkins:latest

    USER root
    RUN apt-get update \
     && apt-get upgrade -y \
     && apt-get install -y sudo libltdl-dev \
     && rm -rf /var/lib/apt/lists/*
    RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers

    USER jenkins
    # Here you can install some Jenkins plugins if you want
I want to run Jenkins in Docker container. Everything is OK. I can run it like this:docker run -d --name jenkins -t -i -p 49001:8080 jenkinsI can also add persistent storage. The problem came when I created a pipeline can has to executedockercommands (buildandpush). First the error was that docker wasn't installed on the system. Yes, expected. Then I started searching and found out how I can run docker in container (passing 2 persistent volumes):docker run ... -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker -p 49001:8080 jenkinsThis runs, but with some exceptions. There isdockercommand in the container but when I try to run it, it throws an exception:docker: error while loading shared libraries: libltdl.so.7: cannot open shared object file: No such file or directoryHow can I fix this problem? What is the correct way for installing Jenkins in Docker and run Docker in it? I think there are 2 ways:The one that I am doing - use the socketsI can expose the docker api that allows connections and running commandsActually is it worth running Jenkins in Docker? I tried to install the missing lib manually from theapt-getIt works but I know that it's not the correct way..
Jenkins in Docker container (run docker pipeline)
Yes, you can run gitlab-ce on Windows using Docker. First, make sure Docker is installed on Windows; otherwise install it. Detailed documentation for how to run GitLab using Docker is found under "GitLab Docker images", including how to access the web interface.
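As a rough sketch, assuming the standard gitlab/gitlab-ce image from Docker Hub; the hostname, published ports and named volumes below are illustrative and should be adapted to your setup:

    docker run --detach \
      --hostname gitlab.example.com \
      --publish 443:443 --publish 80:80 --publish 2222:22 \
      --name gitlab \
      --volume gitlab-config:/etc/gitlab \
      --volume gitlab-logs:/var/log/gitlab \
      --volume gitlab-data:/var/opt/gitlab \
      gitlab/gitlab-ce:latest

The web interface would then be reachable on port 80 of the Docker host once GitLab finishes starting up.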
I searched a lot and found that GitLab Community Edition cannot be installed on Windows, so now I want to install it with the help of Docker. I do not know whether that is possible and how I can do it.
install gitlab on Windows with Docker
It is possible to specify the size limit while creating the docker volume using size, as per the documentation. Here is the example command provided in the documentation:

    docker volume create -d flocker -o size=20GB my-named-volume

UPDATE

Some more examples from the git repository:

The built-in local driver on Linux accepts options similar to the linux mount command:

    $ docker volume create --driver local --opt type=tmpfs --opt device=tmpfs --opt o=size=100m,uid=1000

Another example:

    $ docker volume create --driver local --opt type=btrfs --opt device=/dev/sda2
If I would like to create a data volume of, let's say, 15GB that would be of type ext4, how would I do that?

docker volume create --name vol just creates an empty volume.

docker volume create --opt type=ext4 --name vol creates an ext4 volume, but I cannot specify its size since ext4 does not support it according to the mount options of ext4.
How to specify the size of a shared Docker volume?
The default capability set granted to containers does not allow a container to modify network settings. By running in privileged mode, you grant all capabilities to the container -- but there is also an option to grant individual capabilities as needed. In this case, the one you require is called CAP_NET_ADMIN (full list here: http://man7.org/linux/man-pages/man7/capabilities.7.html), so you could add --cap-add NET_ADMIN to your docker run command. Make sure to use that option when starting both containers, since they both require some network adjustments to enable transparent packet interception.

In the "proxy" container, configure the iptables pre-routing NAT rule according to the mitmproxy transparent mode instructions, then start mitmproxy (with the -T flag to enable transparent mode). I use a small start script as the proxy image's entry point for this, since network settings changes occur at container runtime only and cannot be specified in a Dockerfile or otherwise persisted.

In the "client" container, just use ip route commands to change the default gateway to the proxy container's IP address on the docker bridge. If this is a setup you'll be repeating regularly, consider using an entry point script on the client image that will set this up for you automatically when the container starts. Container linking makes that easier: you can start the proxy container, and link it when starting the client container. Then the client entry point script has access to the proxy container's IP via an environment variable.

By the way, if you can get away with using mitmproxy in non-transparent mode (configure the client explicitly to use an HTTP proxy), I'd highly recommend it. It's much less of a headache to set up. Good luck, have fun!
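A rough sketch of the moving parts described above; the image and container names, the bridge IP 172.17.0.2 and the mitmproxy listen port 8080 are assumptions for illustration, and the iptables rule follows the general mitmproxy transparent-mode pattern rather than being copied from its docs:

    # start the proxy container with the extra capability
    docker run -d --name mitm --cap-add NET_ADMIN my-mitmproxy-image

    # inside the proxy container: redirect incoming HTTP to mitmproxy, then run it in transparent mode
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
    mitmproxy -T

    # start the client container, linked to the proxy, then point its default route at the proxy's bridge IP
    docker run -it --name client --cap-add NET_ADMIN --link mitm my-client-image
    ip route del default
    ip route add default via 172.17.0.2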
I'm trying to route all traffic of a docker container through mitmproxy running in another docker container. In order for mitmproxy to work, I have to change the gateway IP of the original docker container. Here is an example of what I want to do, but I want to restrict this to be entirely inside docker containers. Any thoughts on how I might be able to do this? Also, I want to avoid running either of the two docker containers in privileged mode.
Running docker container through mitmproxy
Make sure the DNS is set properly. I had some issues that were gone after a docker service restart. If that doesn't help you may want to use the --dns 8.8.8.8 docker switch.

Restarting the docker service:

on a systemd architecture: sudo systemctl restart docker
boot2docker: boot2docker restart
Docker Machine: docker-machine restart

Also, I did something similar (a nodejs image), but I've used another base image; feel free to use whatever you need from my repo.
I am deploying a simple node.js app on a digital ocean server 's docker platform.// package.json{ "name": "docker-centos-hello", "private": true, "version": "0.0.1", "description": "Node.js Hello world app on CentOS using docker", "author": "Daniel Gasienica <[email protected]>", "dependencies": { "express": "3.2.4" } }// app.jsvar express = require('express'); var PORT = 3000; var app = express(); app.get('/', function (req, res) { res.send('Hello world\n'); }); app.listen(PORT); console.log('Running on http://localhost:' + PORT);// docker fileFROM dockerfile/nodejs # Set the working directory WORKDIR /src EXPOSE 3000 CMD ["/bin/bash"]The docker base image is the dockerfile/nodejs, which has built a node.js, I built the image:docker build -t test1 /home/sizhe/docker/testand run the image:docker run -it -p 8080:3000 -v /home/sizhe/docker/test1:/src testBy running the image, I can successfully into the container, the app files are all copied into the container. when I tried the command to install the node.js dependency:npm installHowever, npm can't download all the packages, with a error:Linux 3.13.0-40-generic npm ERR! argv "node" "/usr/local/bin/npm" "install" npm ERR! node v0.10.35 npm ERR! npm v2.1.16 npm ERR! code ENOTFOUND npm ERR! errno ENOTFOUND npm ERR! syscall getaddrinfo npm ERR! network getaddrinfo ENOTFOUND npm ERR! network This is most likely not a problem with npm itself npm ERR! network and is related to network connectivity. npm ERR! network In most cases you are behind a proxy or have bad network settings. npm ERR! network npm ERR! network If you are behind a proxy, please make sure that the npm ERR! network 'proxy' config is set properly. See: 'npm help config'
can't install npm in the docker container?
tl;dr: This is tty default behaviour and unrelated to docker, per the ticket filed on GitHub about your exact issue. Quoting the relevant comments in that ticket:

"Looks like this is indeed TTY by default translates newlines to CRLF

    $ docker run -t --rm debian sh -c "echo -n '\n'" | od -c
    0000000 \r \n
    0000002

disabling "translate newline to carriage return-newline" with stty -onlcr correctly gives:

    $ docker run -t --rm debian sh -c "stty -onlcr && echo -n '\n'" | od -c
    0000000 \n
    0000001

Default TTY options seem to be set by the kernel ... On my linux host it contains:

    /*
     * Defaults on "first" open.
     */
    #define TTYDEF_IFLAG (BRKINT | ISTRIP | ICRNL | IMAXBEL | IXON | IXANY)
    #define TTYDEF_OFLAG (OPOST | ONLCR | XTABS)
    #define TTYDEF_LFLAG (ECHO | ICANON | ISIG | IEXTEN | ECHOE|ECHOKE|ECHOCTL)
    #define TTYDEF_CFLAG (CREAD | CS7 | PARENB | HUPCL)
    #define TTYDEF_SPEED (B9600)

ONLCR is indeed there."

When we go looking at the ONLCR flag documentation, we can see that:

    [-]onlcr: translate newline to carriage return-newline

To again quote the github ticket: "Moral of the story, don't use -t unless you want a TTY." TTY line endings are CRLF; this is not Docker's doing.
I'm using Docker client Version: 18.09.2.When I run start a container interactively and run adatecommand, then pipe its output tohexdumpfor inspection, I'm seeing a trailing\nas expected:$ docker run --rm -i -t alpine / # date | hexdump -c 0000000 T h u M a r 7 0 0 : 1 5 0000010 : 0 6 U T C 2 0 1 9 \n 000001dHowever, when I pass thedatecommand as an entrypoint directly and run the container, I get a\r\nevery time there's a new line in the output.$ docker run --rm -i -t --entrypoint=date alpine | hexdump -c 0000000 T h u M a r 7 0 0 : 1 6 0000010 : 1 9 U T C 2 0 1 9 \r \n 000001eThis is weird.It totally doesn't happen when I omit-t(not allocating any TTY):docker run --rm -i --entrypoint=date alpine | hexdump -c 0000000 T h u M a r 7 0 0 : 1 7 0000010 : 3 0 U T C 2 0 1 9 \n 000001dWhat's happening here?This sounds dangerous, as I usedocker runcommand in my scripts, and if I forget to omit-tfrom my scripts, the output I'll collect fromdocker runcommand will haveinvisible/non-printible\rcharacters which can cause all sorts of issues.
Why do "docker run -t" outputs include \r in the command output?
I am running into the exact same issue. I did find that flushing stdout causes the logging to appear when it otherwise would not. Looks like a bug in Cloud Run to me.

    print(json.dumps(entry))

    import sys
    sys.stdout.flush()

[screenshot: output with flushing]
I followed this guide https://firebase.google.com/docs/hosting/cloud-run to set up a Cloud Run docker. Then I tried to follow this guide https://cloud.google.com/run/docs/logging to perform a simple log, trying to write a structured log to stdout. This is my code:

    trace_header = request.headers.get('X-Cloud-Trace-Context')

    if trace_header:
        trace = trace_header.split('/')
        global_log_fields['logging.googleapis.com/trace'] = "projects/sp-64d90/traces/" + trace[0]

    # Complete a structured log entry.
    entry = dict(severity='NOTICE',
                 message='This is the default display field.',
                 # Log viewer accesses 'component' as jsonPayload.component'.
                 component='arbitrary-property',
                 **global_log_fields)

    print(json.dumps(entry))

I cannot see this log in the Cloud Logs Viewer. I do see the HTTP GET logs each time I call the docker. Am I missing anything? I am new to this and wondered what is the simplest way to be able to log information and view it, assuming the docker I created was exactly with the steps from the guide (https://firebase.google.com/docs/hosting/cloud-run). Thanks
Simplest way to perform logging from Google Cloud Run
Because Docker is based on Linux, it cannot run directly on Windows/OS X. Instead, it runs inside a VirtualBox virtual machine (a Docker Machine) that runs a Linux operating system. That's why when you install Docker Toolbox you see that VirtualBox is installed. To see files and folders inside this virtual machine, use

    docker-machine ssh default

default is the name of the default Docker Machine.
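For example, once inside the VM you can list the mount point reported by docker volume inspect (the path is taken from the question):

    docker-machine ssh default
    # now inside the boot2docker VM
    ls /mnt/sda1/var/lib/docker/volumes/mysql/_data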
I'm running Docker 1.11 on OS X and I'm trying to figure out where my local volumes are being written. I created a Docker volume by runningdocker volume create --name mysql. I then randocker volume inspect mysqland it output the following:[ { "Name": "mysql", "Driver": "local", "Mountpoint": "/mnt/sda1/var/lib/docker/volumes/mysql/_data", "Labels": {} } ]The issue is/mnt/sda1/var/lib/docker/volumes/mysql/_datadoesn't actually exist on my machine. I thought maybe the issue was that it didn't actually get created until it was used by a container so I started a container by runningdocker run --name mysql -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=mysql -P -d mysql:5.7and then created a database in MySQL, but the mount point still doesn't exist. I even randocker inspect mysqlto ensure it's using the correct volume and got the following:... "Mounts": [ { "Name": "mysql", "Source": "/mnt/sda1/var/lib/docker/volumes/mysql/_data", "Destination": "/var/lib/mysql", "Driver": "local", "Mode": "z", "RW": true, "Propagation": "rprivate" } ], ...At this point I'm completely lost as to where the data is being written. What am I missing?
Docker volume mount doesn't exist
It looks like a database is overhead for your case. You just need some distributed lightweight key-value storage with shared key lock support. Here are some candidates:

etcd (https://coreos.com/etcd)
consul (https://www.consul.io, especially https://www.consul.io/docs/commands/lock.html)
redis (http://redis.io)
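For instance, a minimal Redis-based sketch of the lock-then-update flow described in the question; the key names and the 10-second expiry are made up for illustration (SET ... NX EX, INCR and DEL are standard Redis commands):

    # acquire a lock with a 10-second expiry; succeeds only if the key does not already exist
    redis-cli SET counter-lock "$HOSTNAME" NX EX 10

    # if the SET returned OK, update the shared counter and release the lock
    redis-cli INCR api-counter
    redis-cli DEL counter-lock

Note that INCR is itself atomic in Redis, so for a plain counter the explicit lock can even be dropped; the lock pattern matters when the read-modify-write is more involved.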
I have a simple C++ service (API endpoint) that increases a counter every time the API is called. When the caller posts data to http://10.0.0.1/add the counter has to be incremented by 1 and the value of the counter returned to the caller.

Things get more complicated when the service is getting dockerized. When two instances of the same service run, the addition has to be done atomically, i.e. the counter value is stored in a database and each docker instance has to acquire a lock, get the old value, add one, return it to the caller and unlock.

When the instances are processes on the same Linux machine, we used shared memory to efficiently lock, read, write and unlock the shared data, and the performance was acceptable. However, when we use dockers and a database, the performance is low. The results are OK, but the performance is low.

What is the canonical way for instances of dockerized services to perform operations like the one described above? Is there a "shared memory" feature for containerized processes?
How to atomically update a counter shared between Docker instances
In the Dockerfile, add a local file using ADD, e.g.

    ADD your-local.jar /some-container-location

You could use volumes to put a file in the container at runtime, e.g.

    VOLUME /copy-into-this-dir

And then you run using

    docker run -v=/location/of/file/locally:/copy-into-this-dir -t me/java7

You can use ENTRYPOINT and CMD to pass arguments, e.g.

    ENTRYPOINT ["java", "-jar", "/whatever/your.jar"]
    CMD [""]

And then again run using

    docker run -v=/location/of/file/locally:/copy-into-this-dir -t me/java7 --myNumber 42

(Have a look at the Dockerfile documentation.)
I have a java application (jar file) that I want to be able to run from a docker image. I have created a Dockerfile to create an image using centos as base and install java as such:

    FROM centos
    RUN yum install -y java-1.7.0-openjdk

I ran docker build -t me/java7 afterwards to obtain the image me/java7, however I am stuck at some dead ends:

How do I copy the jar file from the host into the image/container?

I require 2 parameters. One is a file, which needs to be copied into a directory in the container at runtime. The other is a number which needs to be passed to the jar file in the java -jar command automatically when the user runs docker run with the parameters.

Extra notes: The jar file is a local file, not hosted anywhere accessible via wget or anything. The closest I have at the moment is a windows share containing it. I could also access the source from a git repository, but that would involve compiling everything and installing maven and git on the image, so I'd rather avoid that. Any help is much appreciated.
How to create docker image for local application taking file and value parameters
The .env file in the project root and the env_file: field in the Compose file are two different concepts.

The .env file is for setting a default environment for Compose. Values set in this file can be used within the Compose file.

The env_file: field is for setting the default environment for a container. Values set in this can be used in the container, but not in the Compose file.

See https://docs.docker.com/compose/env-file/ for more information.
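To illustrate the distinction with a minimal sketch: the file names and GIRLFRIEND_FUMIO_PORT come from the question, while SOME_RUNTIME_SETTING is a made-up variable for the container-only case:

    # .env in the project root: available for ${...} substitution inside dev.yml itself
    echo "GIRLFRIEND_FUMIO_PORT=8000" > .env

    # compose/fumio_dev/dev.env: injected only into the container's environment,
    # so it cannot be used for the ports: mapping in dev.yml
    echo "SOME_RUNTIME_SETTING=foo" > compose/fumio_dev/dev.env

    docker-compose -f dev.yml up -d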
Here's my compose filedev.ymlversion: '2' volumes: rethinkdb_data_dev: {} services: rethinkdb: image: rethinkdb:latest volumes: - rethinkdb_data_dev:/home/rethinkdb_data rabbitmq: image: rabbitmq:latest fumio: build: context: . dockerfile: ./compose/fumio_dev/Dockerfile depends_on: - rethinkdb - rabbitmq links: - rethinkdb - rabbitmq env_file: ./compose/fumio_dev/dev.env environment: - GIRLFRIEND_FUMIO_CONFIG=development - GIRLFRIEND_FUMIO_NOSQLDATABASE_HOST=rethinkdb ports: - "${GIRLFRIEND_FUMIO_PORT}:8001"Theenvironmentinside thedev.ymlfile is intentional, so I can override them inside withdev.envif needed.Mydev.envfile, located insidecompose/fumio_dev/folder, relative to thedev.ymlfile.GIRLFRIEND_FUMIO_PORT=8000Here's what happens when I rundocker-compose -f dev.yml buildIf I provide.envfile in the root folder, it runs fine, docker-compose ignores theenv_file's value and try to use the default.envinstead. So docker-compose env_file is somehow not working as intended or I'm missing something?My docker-compose version is 1.8.0, I downgraded it to 1.7.1 but still no luck (installed using pip).
docker-compose not recognizing env_file file/location, and still tries to use the default .env
This kind of asterisk expansion is done by the command line processor - the shell - and you circumvent that by invoking java directly. Much the same way as commands should be invoked with "CMD /C" under Windows to get full treatment. Invoke /bin/sh instead.
I have a Dockerfile with the following CMD to start my spring boot app:

    FROM java:8-jre
    # ...
    CMD ["java", "-jar", "/app/file*.jar"]

When I try to start a container from the created image I get:

    Error: Unable to access jarfile /app/file*.jar

But when I override the CMD while starting the container and execute the command in the container, everything works fine:

    docker run -it bash
    root@:/app# java -jar /app/file*.jar

Is it possible to use wildcards with the java -jar command using docker CMDs? Please don't tell me not to use wildcards. I want to use them because of reasons ;-)

Update

Based on the answer I was able to fix it:

    CMD ["/bin/sh", "-c", "java -jar /app/file*.jar"]
Why does wildcard for jar execution not work in docker CMD?
You can pass a space-separated string to the build, then convert the string to an array or just loop over the string.

Dockerfile:

    FROM alpine
    ARG items
    RUN for item in $items; do \
          echo "$item"; \
        done;

Pass the value at build time:

    docker build --build-arg items="item1 item2 item3 item4" -t my_image .

Output:

    Step 3/3 : RUN for item in $items; do echo "$item"; done;
     ---> Running in bee1fd1dd3c6
    item1
    item2
    item3
I am calling the docker build command from a shell script and I want to pass an array in the build args. First question: can I really do that? If yes, how do I iterate over it inside the Dockerfile? A small example will really help.
How can I pass array into a dockerfile and loop through it?
You should do a Robomongo SSH tunnel connection to MongoDB inside the docker container. First of all you should install an SSH server inside your docker container: https://docs.docker.com/engine/examples/running_ssh_service/

After that you should configure your connection in Robomongo. Inside "Connection Settings" there are configuration tabs for your Robomongo connection. Go to the "SSH" tab and configure your SSH connection to the docker container. After that go to the "Connection" tab and configure your connection to MongoDB as if it were in localhost scope.
I'm running a NodeJS app with docker-compose. Everything works fine and I can see all my data by connecting to Mongo inside the container. But when I connect with RoboMongo I don't see any data. How can I deal with this problem?
connect robomongo to mongoDB docker container
There's no way to export a variable from a script to a child image. As a general rule, environment variables travel down, never up to a parent. ENV will persist in the build environment and to child images and containers.

Dockerfile:

    FROM busybox
    ENV PLATFORM_HOME test
    RUN echo $PLATFORM_HOME

Dockerfile.child:

    FROM me/platform
    RUN echo $PLATFORM_HOME
    CMD ["sh", "-c", "echo $PLATFORM_HOME"]

Build the parent:

    docker build -t me/platform .

Then build the child:

    → docker build -f Dockerfile.child -t me/platform-test .
    Sending build context to Docker daemon 3.072kB
    Step 1/3 : FROM me/platform
     ---> 539b52190af4
    Step 2/3 : RUN echo $PLATFORM_HOME
     ---> Using cache
     ---> 40e0bfa872ed
    Step 3/3 : CMD sh -c echo $PLATFORM_HOME
     ---> Using cache
     ---> 0c0e842f99fd
    Successfully built 0c0e842f99fd
    Successfully tagged me/platform-test:latest

Then run:

    → docker run --rm me/platform-test
    test
I've searched some of the questions already, like "docker ENV vs RUN export", which explains the differences between those commands, but that didn't help in solving my problem.

For example I have a script called myscript:

    #!/bin/bash
    export PLATFORM_HOME="$(pwd)"

And have the following lines in the Dockerfile:

    ...
    COPY myscript.sh /
    RUN ./myscript.sh

I've also tried to use ENTRYPOINT instead of RUN, or declaring the variable before calling the script, all with no success. What I want to achieve is that PLATFORM_HOME can be referenced from other Dockerfiles which use that one as a base. How to do it?
docker run script which exports env variables
Unfortunately host networking is not available for Docker for Windows, and neither is macvlan networking. If you are not stuck on Hyper-V, consider using Docker Toolbox on Windows.

Quoted from https://docs.docker.com/network/network-tutorial-host/#prerequisites:

"The host networking driver only works on Linux hosts, and is not supported on Docker for Mac, Docker for Windows, or Docker EE for Windows Server."

Also see the following related issues on GitHub:

https://github.com/docker/for-win/issues/1644
https://github.com/docker/for-win/issues/937
https://github.com/docker/for-win/issues/543
https://github.com/docker/for-mac/issues/2716
I have windows 10 pro and I'm trying to run a docker with network mode host.my issue is that I can't run a docker and access it using the host ip not 127.0.0.1 and not the ip (in linux it works differently).looks like the hyper v has it's own network that not accessible using the host ipdocker run -d --network=host nginxoutput:CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 8edd86bf292b nginx "nginx -g 'daemon of…" 3 seconds ago Up 2 seconds happy_curieso there is no ports as expected but and no errors. When I'm trying to open the browser using 127.0.0.1 I'm gettingERR_CONNECTION_REFUSEDif I set ports to instead of network mode host it is workingdocker run -d -p 80:80 nginxHyper v Ethernet adapter vEthernet (DockerNAT):Connection-specific DNS Suffix . : IPv4 Address. . . . . . . . . . . : 10.0.75.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . .Remarks:changing in the hyper v virtual switch manager the network to be external - not helpingfirewall is disabledany idea how to work with network mode host in windows?
windows run docker with --network=host and access with 127.0.0.1
So this regular expression, [a-z0-9]+(?:[._-][a-z0-9]+)*, doesn't include any upper case letters. So you should change your image name to devopsclient.
I am trying to build my image using this plugin:https://github.com/spotify/docker-maven-plugin#use-a-dockerfileWhen I runmvn clean package docker:buildI get this error:[ERROR] Failed to execute goal com.spotify:docker-maven-plugin:0.2.3:build (defa ult-cli) on project demo: Exception caught: Request error: POST https://192.168. 99.100:2376/v1.12/build?t=DevOpsClient: 500: HTTP 500 Internal Server Error -> [ Help 1]When I check the docker daemon logs, I see this:Handler for POST /build returned error: repository name component must match \"[a-z0-9]+(?:[._-][a-z0-9]+)*\"" statusCode=500Here is the doc for the naming convention:https://docs.docker.com/registry/spec/api/Apparently you cannot have any upper case letters.I am trying to build using Spring boot my following this guide:https://spring.io/guides/gs/spring-boot-docker/I am using a SNAPSHOT release of spring boot and I have a directory named demo-0.1.1-SNAPSHOT. I believe this may be causing the problem.Also I am working on windows and my project directory path is like:C:\Users\myname\UserRegistrationClient\git\..... etcWould this also affect the repository naming convention?And how would I change it?
docker repository name component must match
Permission denied on a default install indicates you are trying to access the socket from a user other than root or one that is not in the docker group. You should be able to run:

    sudo usermod -a -G docker $username

on your desired $username to add them to the group. You'll need to log out and back in for this to take effect (use newgrp docker in an existing shell, or restart the daemon if this is an external service accessing docker, like your CGI scripts).

Note that doing this effectively gives that user full root access on your host, so do this with care.
I am trying to run a Python CGI script inside which I need to run a docker image. I am using Docker version 1.6.2. The user is "www-data", which is added to the docker group:

    www-data : www-data sudo docker

On the machine, with www-data I am able to execute docker commands:

    www-data@mytest:~/html/new$ docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

I am getting the following error while running the docker image from the Python CGI script:

    fatal msg="Get http:///var/run/docker.sock/v1.18/images/json: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?"

Is there anything I am missing here?
/var/run/docker.sock: permission denied while running docker within Python CGI script
You are correct; unless there is some special routing happening, the fact that the containers are running means your services are available. You can see the ports being exposed from the docker ps -a command:

    CONTAINER ID: 560f78689902
    IMAGE:        moviedecisionweb:dev
    COMMAND:      "C:\\remote_debugger\\…"
    CREATED:      About a minute ago
    STATUS:       Up About a minute
    PORTS:        0.0.0.0:52002->80/tcp, 0.0.0.0:52001->443/tcp
    NAMES:        mdweb

    CONTAINER ID: 1cd7f72426fe
    IMAGE:        moviedecisionapi:dev
    COMMAND:      "C:\\remote_debugger\\…"
    CREATED:      About a minute ago
    STATUS:       Up About a minute
    PORTS:        0.0.0.0:52005->80/tcp, 0.0.0.0:52004->443/tcp
    NAMES:        mdapi

Based on the provided output, you have two docker containers running. I'm assuming the ports 80 & 443 are serving the HTTP & HTTPS services (respectively) from your app/s.

For container "mdweb", you should be able to access the docker services from your docker host machine (PC) via:

http://localhost:52002
https://localhost:52001

For container "mdapi", you should be able to access the docker services from your docker host machine (PC) via:

http://localhost:52005
https://localhost:52004

I believe you can use localhost and 127.0.0.1 (but not 0.0.0.0) interchangeably in the above. You cannot use the hostnames "mdweb" or "mdapi" from your docker HOST machine - unless you have explicitly set up your DNS to handle these names. However, you can use these hostnames if you are inside a docker container on the same docker network. If you provide more information (e.g. your docker-compose.yml), we could help you further...
I built a very simple web app and web API on .NET Core and configured docker-compose to get them to communicate over the same network correctly.

In Visual Studio, when I hit play on the Docker Compose project, it runs fine; both the web app and the web API work and communicate correctly. On the Docker Desktop app I see them running (green). But when I close/stop the debugger in VS, I can't access the websites anymore, even though the containers are still running. I thought docker worked as a sort of IIS.

Am I misunderstanding the docker capabilities, or do I need to run them again from a CLI or publish them somewhere or what? I thought the fact that the containers are up and running should mean they're live for me to navigate to. Help me out over here please.
How to access a website running on docker after closing the debug on Visual Studio
After some more reading I finally figured it out. Instead of dotnet build you run:

    dotnet publish

This will place all files (including dependencies) in a publish folder. And this folder can then be used directly with a microsoft/dotnet:-core image.
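A minimal sketch of that flow; the base image tag and DLL name are taken from the question's Dockerfile and code, while the output folder name and image tag below are assumptions:

    dotnet publish -c Release -o out

    cat > Dockerfile <<'EOF'
    FROM microsoft/dotnet:1.0.0-core
    COPY out /app
    WORKDIR /app
    ENTRYPOINT ["dotnet", "ConsoleCoreTestApp.dll"]
    EOF

    docker build -t consolecoretestapp .
    docker run --rm consolecoretestapp

Because publish copies Newtonsoft.Json.dll and the rest of the dependency graph into the output folder, no dotnet restore is needed inside the container.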
I'm trying to build a .NET Core app docker image. But I can't figure out how I'm supposed to get the project's NuGet dependencies into the image.For simplicity reasons I've create a .NET Core console application:using System; using Newtonsoft.Json; namespace ConsoleCoreTestApp { public class Program { public static void Main(string[] args) { Console.WriteLine($"Hello World: {JsonConvert.False}"); } } }It just has one NuGet dependency onNewtonsoft.Json. When I run the app from Visual Studio, everything works fine.However, when I create a Docker image from the project and try to execute the app from there, it can't find the dependency:# dotnet ConsoleCoreTestApp.dll Error: assembly specified in the dependencies manifest was not found -- package: 'Newtonsoft.Json', version: '9.0.1', path: 'lib/netstandard1.0/Newtonsoft.Json.dll'This is to be expected becauseNewtonsoft.Json.dllis not being copied by Visual Studio to the output folder.Here's theDockerfileI'm using:FROM microsoft/dotnet:1.0.0-core COPY bin/Debug /appIs there a recommended way of dealing with this problem?I don't want to rundotnet restoreinside of the container (as I don't want to re-download all dependencies everytime the container runs).I guess I could add aRUN dotnet restoreentry to theDockerfilebut then I couldn't usemicrosoft/dotnet:-coreas base image anymore.And I couldn't find a way to make Visual Studio copy all dependencies into the output folder (like it does with regular .NET Framework projects).
How to include dependencies in .NET Core app docker image?
The Docker CLI uses the Golang CLI manager spf13/cobra to handle its flags such as --entrypoint. This is where the entrypoint is extracted:

    flags.StringVar(&copts.entrypoint, "entrypoint", "", "Overwrite the default ENTRYPOINT of the image")

StringVar from the spf13/pflag library will only extract the first string after the flag, due to how it parses command line arguments. So it won't get all strings after the flag if they're separated by spaces or not enclosed in double quotes ". So this seems to be that technical limitation.
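In practice the usual workaround is to put only the executable in --entrypoint and pass the remaining parameters as the command, which Docker appends after the entrypoint; a sketch based on the compose example in the question (the image name is a placeholder):

    # roughly equivalent to entrypoint: ["php", "-d", "memory_limit=-1", "vendor/bin/phpunit"]
    docker run --entrypoint php my-image -d memory_limit=-1 vendor/bin/phpunit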
I'm trying to understand a discrepancy between ENTRYPOINT in a Dockerfile and docker run --entrypoint. The exec form of ENTRYPOINT allows multiple parameters,

    # Source: https://docs.docker.com/engine/reference/builder/#entrypoint
    ENTRYPOINT ["executable", "param1", "param2"]

but docker run --entrypoint=executable accepts only one. Many examples show how to override ENTRYPOINT with arguments, but they do so by also specifying CMD:

    docker run --entrypoint=executable image:latest param1 param2

Is there a technical limitation preventing a direct docker run --entrypoint equivalent to ENTRYPOINT ["executable", "param1", "param2"]? Docker Compose seems to support it with

    # Source: https://docs.docker.com/compose/compose-file/compose-file-v3/#entrypoint
    entrypoint: ["php", "-d", "memory_limit=-1", "vendor/bin/phpunit"]

as do other providers which work with Docker (e.g. AWS ECS). Or perhaps, internally, [...entrypoint_args, ...command_args] is actually massaged into [entrypoint, ...command] to make it compatible with docker run?
Docker run --entrypoint with multiple parameters
I went down the road that Benjamin W. was talking about, with having VERSION in my environment rather than just in that specific step. This worked for me to set the variable in one step, then use it in separate steps.

    - name: Set variables
      run: |
        VER=$(cat VERSION)
        echo "VERSION=$VER" >> $GITHUB_ENV

    - name: Build Docker Image
      uses: docker/build-push-action@v2
      with:
        context: .
        file: ${{ env.BASE_DIR }}/Dockerfile
        load: true
        tags: |
          ${{ env.USER }}/${{ env.REPO }}:${{ env.VERSION }}
          ${{ env.USER }}/${{ env.REPO }}:latest
In my Docker project's repo, I have a VERSION file that contains nothing more than the version number.1.2.3In Travis, I'm able tocatthe file to an environment variable, and use that to tag my build before pushing to Docker Hub.--- env: global: - USER=username - REPO=my_great_project - VERSION=$(cat VERSION)What is the equivalent of that in GitHub Actions? I tried this, but it's not working.name: Test on: ... ... env: USER: username REPO: my_great_project jobs: build_ubuntu: name: Build Ubuntu runs-on: ubuntu-latest env: BASE: ubuntu steps: - name: Check out the codebase uses: actions/checkout@v2 - name: Build the image run: | VERSION=$(cat VERSION) docker build --file ${BASE}/Dockerfile --tag ${USER}/${REPO}:${VERSION} . build_alpine: name: Build Alpine runs-on: ubuntu-latest env: BASE: alpine ... ... ...I've also tried this, which doesn't work.- name: Build the image run: | echo "VERSION=$(cat ./VERSION)" >> $GITHUB_ENV docker build --file ${BASE}/Dockerfile --tag ${USER}/${REPO}:${VERSION} .
GitHub Actions: How to get contents of VERSION file into environment variable?
The reason behind requiring a reverse proxy in front of Artifactory is related to a Docker client limitation - you cannot use a context path when providing the registry path, e.g. sdpvvrwm812.ib.tor.company.com:8081/artifactory/api/docker/docker-images is not valid.

The Docker client assumes you are working with one big registry for all images, while Artifactory allows you to manage multiple registries (repositories) on the same server. To overcome this issue you should set up a reverse proxy which will allow the Docker client to send requests to the root context and forward those requests to the correct repository path in Artifactory, for example forwarding requests from sdpvvrwm812.ib.tor.company.com:8888/ to sdpvvrwm812.ib.tor.company.com:8081/artifactory/api/docker/docker-images

The Artifactory documentation contains configuration examples for NginX, Apache and HAProxy. Notice that there are different configurations for Docker registry API v1 and v2. After setting up the reverse proxy, the Docker client should use the proxy in order to access Artifactory.

If you are using the --insecure-registry flag there is no need to configure an SSL certificate. With older versions of Docker, before this flag was introduced (Docker 1.3.2), it was a mandatory requirement.
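As an illustration only (not the exact configuration from the Artifactory docs), an nginx server block for this kind of forwarding might look roughly like the sketch below; the listen port, repository name and host come from the question, while the rewrite pattern is an assumption you should verify against the official NginX example:

    server {
        listen 8888;
        server_name sdpvvrwm812.ib.tor.company.com;

        location / {
            # map Docker registry API calls onto the docker-images repository in Artifactory
            rewrite ^/(v1|v2)/(.*)$ /artifactory/api/docker/docker-images/$1/$2 break;
            proxy_pass http://sdpvvrwm812.ib.tor.company.com:8081;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }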
I tried to set up Artifactory as a docker registry as shown in this video: http://www.jfrog.com/video/artifactory-docker-integration/

However, I don't have SSL installed in Artifactory, so I'm using the --insecure-registry flag (as shown in "error in docker build publish plugin" and "Remote access to a private docker-registry"). Anyway, I don't know how to figure out the Artifactory-as-docker-registry URL so I can do this:

    curl -k -uusername:password "http://sdpvvrwm812.ib.tor.company.com:8081/artifactory/api/docker/docker-images"

This page, http://www.jfrog.com/confluence/display/RTF/Docker+Repositories, shows at the bottom that something called a reverse proxy might be needed? Is this true, and if so how do I install such a thing?
artifactory as docker registry
I also had a similar issue with version 23.0.2. For me it was a missing buildx package. The following command solved the issue:

    apt-get install docker-buildx-plugin

The message and docs of Docker on Linux are a bit vague unfortunately. I found out about the missing package while I was planning to install Docker from scratch.
When I try to do a Docker Build using Docker as usual, I get the error message in the image and cannot Build. What should I do in this case? By the way, Docker's version is 23.0.1. (https://i.stack.imgur.com/AzgNi.png)(https://i.stack.imgur.com/PIryk.png) (https://i.stack.imgur.com/kMF5Y.png)When I uninstall docker buildx and then Build, I get other warning errors and the Build itself works, but parallel processing cannot be performed. My ideal would be to use Buildx to do a parallel Build.
docker Buildx "ERROR: BuildKit is enabled but the buildx component is missing or broken" error
Are you trying to connect from inside the container? If not, you may find this other question (covering the outside-container case) helpful: From inside of a Docker container, how do I connect to the localhost of the machine?
I start my docker container with:

    docker run -it --expose 10001 --expose 8080 -p 10001:10001 -p 8080:8080 -p 80:80 --rm lucchi/covid90/100e

My docker ps then has:

    CONTAINER ID   IMAGE                 COMMAND                  CREATED         STATUS                  PORTS                                                                  NAMES
    1521e0c3d947   lucchi/covid90/100e   "/bin/sh -c /bin/bash"   2 seconds ago   Up Less than a second   0.0.0.0:80->80/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:10001->10001/tcp   funny_panini

But I can't connect to localhost from inside the container. I tried:

    curl 0.0.0.0:8080
    curl 127.0.0.1:8080
    curl https://localhost:8080

but keep getting

    curl: (7) Failed to connect to localhost port 8080: Connection refused

Most of the answers I read are about adding -p to the run command; I don't get what I'm missing.
curl: (7) Failed to connect to localhost port 10001: Connection refused DOCKER
For IntelliJ, it can also be resolved by adding DOCKER_TLS_VERIFY and DOCKER_CERT_PATH to your run/debug configuration as environment variables. The value of each can be empty (depending on your docker setup), so the run/debug configuration shows:

    DOCKER_TLS_VERIFY=;DOCKER_CERT_PATH=
I went through links like https://github.com/docker/compose/issues/3021 and https://github.com/docker/compose/issues/3937, but I am still facing the error below:

    C:\Users\pc\Downloads\docker-compose-scripts>docker-compose up --d
    ERROR: TLS configuration is invalid - make sure your DOCKER_TLS_VERIFY and DOCKER_CERT_PATH are set correctly.
    You might need to run `eval "$(docker-machine env default)"`

Versions of docker:

    C:\Users\pc>docker --version
    Docker version 20.10.6, build 370c289

    C:\Users\pc>docker-compose --version
    docker-compose version 1.29.1, build c34c88b2
ERROR: TLS configuration is invalid - make sure your DOCKER_TLS_VERIFY and DOCKER_CERT_PATH are set correctly. on windows
The exit code 6 means "Host public key is unknown. sshpass exits without confirming the new key."

So either you populate ~/.ssh/known_hosts beforehand with the fingerprint of the host, or you just skip the host public key check by adding the StrictHostKeyChecking=no option to scp. The updated line would look like this:

    RUN sshpass -p userPassword scp -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r user@server:~/data/* ./
I have the following in my docker file:

    RUN sudo apt-get install sshpass -y
    RUN sshpass -p userPassword scp -r user@server:~/data/* ./

But when I try to build my image it fails with:

    Exception caught: The command '/bin/sh -c sshpass -p userPassword scp -r user@server:~/data/* ./' returned a non-zero code: 6 -> [Help 1]

However, if I remove these lines, build the image, ssh onto the container and manually run the command from bash, it works perfectly. Can anyone tell me how to get around this?
Docker RUN fails with "returned a non-zero code: 6"
Update Jun 2016: The answer below is outdated starting with docker 1.10. See this other similar answer for the new solution: https://stackoverflow.com/a/34476794/1556338

Old answer

Create a network:

    $ docker network create --driver bridge my-net

Reference that network as an environment variable (${NETWORK}) in the docker-compose.yml files. E.g.:

    pg:
      image: postgres:9.4.4
      container_name: pg
      net: ${NETWORK}
      ports:
        - "5432"
    myapp:
      image: quay.io/myco/myapp
      container_name: myapp
      environment:
        DATABASE_URL: "http://pg:5432"
      net: ${NETWORK}
      ports:
        - "3000:3000"

Note that pg in http://pg:5432 will resolve to the IP address of the pg service (container). No need to hardcode IP addresses; an entry for pg is automatically added to the /etc/hosts of the myapp container.

Call docker-compose, passing it the network you created:

    $ NETWORK=my-net docker-compose up -d -f docker-compose.yml -f other-compose.yml

I've created a bridge network above, which only works within one node (host). Good for dev. If you need to get two nodes to talk to each other, you need to create an overlay network. Same principle though: you pass the network name to the docker-compose up command.
I have a dockerized application with a few services running using docker-compose. I'd like to connect this application with ElasticSearch/Logstash/Kibana (ELK) using another docker-compose application, docker-elk. Both of them are running on the same docker machine in development. In production, that will probably not be the case. How can I configure my application's docker-compose.yml to link to the ELK stack?
Connect two instances of docker-compose [duplicate]
Add an environment variable to your ff section of the Docker Compose file (and you can remove the link):

    ff:
      container_name: ff
      image: selenium/node-firefox
      environment:
        - HUB_PORT_4444_TCP_ADDR=hub
      expose:
        - "5555"

Compose version 2 uses a different style of networking. From the upgrading guide:

"environment variables created by links have been deprecated for some time. In the new Docker network system, they have been removed. You should either connect directly to the appropriate hostname or set the relevant environment variable yourself, using the link hostname."

From the networking documentation:

"links are not required to enable services to communicate - by default, any service can reach any other service at that service's name."

The Selenium dockerfile uses version 1 style networking via an ENV variable. Here in the code, if that variable isn't set (which Docker used to do), the entry_point.sh command exits. Providing the variable explicitly solves this.
I have a docker-compose file that I upgraded from version 1 to version 2. It sets up a simple Selenium hub with a Firefox node. When I set it up as version 1 it launches fine. When I set it up with version 2, the ff container returns "Not linked with a running Hub container" and exits. From what I've researched, it seems the linkage between the containers somehow suffers. Is there a solution? Am I missing something?

version: '2'
services:
  hub:
    container_name: hub
    image: selenium/hub
    ports:
      - "8080:4444" # HOST:CONTAINER
    expose:
      - "4444"
  ff:
    container_name: ff
    image: selenium/node-firefox
    links:
      - hub
    expose:
      - "5555"
Containers are not linked with docker-compose version 2
Compose will create a default network for you as long as you use the version 2 format, but if you'd like to customize the networks, the docs are here: https://docs.docker.com/compose/compose-file/#network-configuration-reference and https://docs.docker.com/compose/networking/#specifying-custom-networks. You can create a networks section at the top level of the Compose file and reference them in the networks section of each service. But you don't need to; just use the default network as described in the comments below.
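For illustration, a minimal sketch of such a top-level networks section; the service and network names below are made up, not taken from any official example:

cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: nginx
    networks:
      - app-net
  db:
    image: postgres
    networks:
      - app-net
networks:
  app-net:
    driver: bridge
EOF
docker-compose up -d    # "web" can now reach "db" by service name on app-net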
The documentation about networking is currently very vague on this: how do you accomplish a docker-compose.yml that creates a virtual network, letting the services (containers) defined by it communicate on that network? The goal in this scenario is not relying on a pre-defined network; rather, the network definition should be self-contained in the docker-compose definition file.

With a pre-defined network, the example below would work if the application in A used the name B as the hostname for accessing the application packaged inside B, listening on its port 9000. The host:port it would use would be B:9000 (more specifically the URI mongodb://B:9000 in my particular case).

foo:
  net: my-pre-defined-network
  container_name: A
  image: foo
bar:
  net: my-pre-defined-network
  container_name: B
  image: bar
  ports:
    - "9000:9000"

But my point is defining a network inside the docker-compose configuration, not assuming one was defined a priori...

TL;DR: A default network is automatically created. See the beginning section of https://docs.docker.com/compose/networking/ for how to address containers within this network.
How do you define a network in a version 2 docker-compose definition file?
The following configuration worked for me with nginx version nginx/1.14.0 (Ubuntu):

location = /health {
  access_log off;
  add_header 'Content-Type' 'application/json';
  return 200 '{"status":"UP"}';
}

To test it: install nginx locally (for example on Ubuntu: https://www.digitalocean.com/community/tutorials/how-to-install-nginx-on-ubuntu-18-04), add the location configuration above to the end of the server block in the file /etc/nginx/sites-available/default, and restart the nginx server. Accessing http://localhost/health should then return the response {"status":"UP"}.
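Once the container is running, the endpoint can be probed from the host; the image name and published port below are placeholders for however you run your nginx container:

docker run -d --name web -p 8080:80 my-nginx-image    # hypothetical image with the /health location baked in
curl -i http://localhost:8080/health                  # expect: HTTP/1.1 200 OK with body {"status":"UP"}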
I have a docker container running an nginx server. I want to provide a REST interface/endpoint to check the health of the server and container, e.g. GET http://container.com/health/ which delivers "true"/OK or "false"/NOK. What is the simplest and quickest solution or best practice? P.S. The server serves as a file browser, i.e. with directory index listing enabled.
Simple healthcheck endpoint in nginx server container
Try this; it has sorted out the problem for many people:

cd "C:\Program Files\Docker\Docker"
./DockerCli.exe -SwitchDaemon
I continue to get the following error when trying to start docker on Windows 10 pro. my HyperV is turned on and running: Version 18.04.0-ce-win62 (17151) Channel: edge e0a85f6Any help would be appreciated!Unable to create: The running command stopped because the preference variable "ErrorActionPreference" or common parameter is set to Stop: Hyper-V encountered an error trying to access an object on computer 'C001715587' because the object was not found. The object might have been deleted. Verify that the Virtual Machine Management service on the computer is running. at New-Switch, : line 117 at , : line 394 at Docker.Core.Pipe.NamedPipeClient.Send(String action, Object[] parameters) in C:\gopath\src\github.com\docker\pinata\win\src\Docker.Core\pipe\NamedPipeClient.cs:line 36 at Docker.Actions.DoStart(SynchronizationContext syncCtx, Boolean showWelcomeWindow, Boolean executeAfterStartCleanup) in C:\gopath\src\github.com\docker\pinata\win\src\Docker.Windows\Actions.cs:line 75 at Docker.Actions.<>c__DisplayClass15_0.b__0() in C:\gopath\src\github.com\docker\pinata\win\src\Docker.Windows\Actions.cs:line 59 at Docker.WPF.TaskQueue.<>c__DisplayClass19_0.<.ctor>b__1() in C:\gopath\src\github.com\docker\pinata\win\src\Docker.WPF\TaskQueue.cs:line 59
Can't start docker on windows
You can set /proc/sys/net/bridge/bridge-nf-call-iptables by editing /etc/sysctl.conf. There you can add [1]:

net.bridge.bridge-nf-call-iptables = 1

Then execute:

sudo sysctl -p

and the changes will be applied. With this, the pre-flight check should pass.

[1] http://wiki.libvirt.org/page/Net.bridge.bridge-nf-call_and_sysctl.conf
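As a quick sanity check before re-running kubeadm init, the value can be read back directly (this is just a verification step, not part of the official guide):

sysctl net.bridge.bridge-nf-call-iptables           # should print: net.bridge.bridge-nf-call-iptables = 1
cat /proc/sys/net/bridge/bridge-nf-call-iptables    # should print: 1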
I am using this guide to install Kubernetes on a Vagrant cluster: https://kubernetes.io/docs/getting-started-guides/kubeadm/

At (2/4) Initializing your master, some errors came up:

[root@localhost ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.4
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`

I checked the /proc/sys/net/bridge/bridge-nf-call-iptables file content; there is only one 0 in it.

At (3/4) Installing a pod network, I downloaded the kube-flannel file: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

And ran kubectl apply -f kube-flannel.yml, which got this error:

[root@localhost ~]# kubectl apply -f kube-flannel.yml
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Up to here, I don't know how to go on.

My Vagrantfile:

# Master Server
config.vm.define "master", primary: true do |master|
  master.vm.network :private_network, ip: "192.168.33.200"
  master.vm.network :forwarded_port, guest: 22, host: 1234, id: 'ssh'
end
Can't install Kubernetes on Vagrant
In fact, the Docker documentation shows how to set up a swarm cluster 'manually' without using docker-machine: see "Create a swarm for development".
Currently I have a bunch of RHEL7 VMs running on RackSpace and want to deploy Docker Swarm for testing purposes. The Docker docs only describe the method of deploying Docker Swarm by using docker-machine. Question: since VirtualBox cannot be used in VMs, are there any other ways I can directly deploy Docker Swarm on my VMs without using docker-machine?
Deploying docker swarm without using docker machine
You really shouldn't use latest in production or anything beyond local machine testing/learning. It makes it ambiguous which image you're using, you can't tell in docker service ls/ps if it's current, and there are all sorts of other ambiguities (like SHAs not being visible in Docker Hub's GUI). If you have no way around it, at least Swarm tries to query your registry and check for an updated SHA. If it sees one when you run docker service update --image <image> <service>, then it will pull and do a rolling update. You can watch docker events to be sure things are happening, and you can use docker service ps --no-trunc <service> to check afterward and see the SHA hashes of the old and new images.
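A rough sketch of that flow, with my_service and myrepo/myimage standing in for your own service and image names; the --force flag is one way to trigger a redeploy even when the tag string itself hasn't changed, so treat it as an assumption to verify against your Docker version:

docker service update --with-registry-auth --force --image myrepo/myimage:latest my_service
docker service ps --no-trunc my_service    # compare the old and new image digests to confirm the rolling update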
In the .yml definition, I'm always pulling the latest image of my service. When I push a new image to the registry, I want to update the image that the service in my stack uses. I don't see any --pull flag, and the documentation for docker service update doesn't explicitly mention this. How can I re-deploy using the recently pushed latest image?
How can I update the latest image that my docker service/stack uses?
So it turns out that the method I listed above with the config.yml was correct. The reason I was seeing a partially successful deployment was that the previously running docker container on the hosts was not being stopped by EB. I think what was happening was that EB was sending something like sudo docker kill --signal=SIGTERM $CONTAINER_ID instead of the more common sudo docker stop $CONTAINER_ID. The specific container I was running didn't respond to SIGTERM, so it would just sit there. When I tested it locally with SIGKILL it would (obviously) stop properly, but SIGTERM alone wouldn't stop it. The issue wasn't the deployment methodology but rather confusion in the output that EB generated and my misinterpretation of it.
I am running an elasticbeanstalk application, with multiple environments. This particular application is hosting docker containers which host a webservice.To upload and deploy a new version of the application to one of the environments, I can go through the web client and click on "Upload and Deploy" and from the file option I select my latest Dockerrun.aws.json file, which references the latest version of the container that is privately hosted. The upload and deploy works fine and without issue.To make it simpler for myself and others to deploy I'd like to be able to use the CLI to upload and deploy the Dockerrun.aws.json file. If I use the clieb deploycommand without any special configuration the normal process of zipping up the whole application and sending it to the host occurs and fails (it cannot reason out that it only needs to read the Dockerrun.aws.json file).I found a documentation tidbit about controlling what is uploaded using the .elasticbeanstalk/config.yml file.Using this syntax:deploy: artifact: Dockerrun.aws.jsonThe file is uploaded and actually deploys successfully to the first batch of instances, and then always fails to deploy to the second set of instances.The failure error is of the flavor: 'container exited unexpectedly...'Can anyone explain, or provide link to the canonical approach for using the CLI to deploy single docker container applications?
Deploy to elasticbeanstalk via CLI deploy command with Dockerrun.aws.json
I solved it using whilp/ssh-agent, though you should note that this is not using SSH_AUTH_SOCK directly and requires an additional long-running container. I'll integrate this approach into docker-rails for ease of use.

Start a long-running container:

docker run -d --name=ssh-agent whilp/ssh-agent:latest

Add your key:

docker run --rm --volumes-from=ssh-agent -v ~/.ssh:/ssh -it whilp/ssh-agent:latest ssh-add /ssh/id_rsa

List your keys:

docker run --rm --volumes-from=ssh-agent -v ~/.ssh:/ssh -it whilp/ssh-agent:latest ssh-add -L

Bash into a container and check the key with ssh -T [email protected].

My yaml looks like:

web:
  build: .
  working_dir: /project
  ports:
    - "3000"
  environment:
    # make ssh keys available via ssh forwarding (see volume entry)
    - SSH_AUTH_SOCK=/ssh-agent/socket
  volumes_from:
    # Use configured whilp/ssh-agent long running container for keys
    - ssh-agent
Could not open a connection to your authentication agent.I am following theapproach of mounting the$SSH_AUTH_SOCKas a volume, but doing so with compose.Setup~/.ssh/configHost * ForwardAgent yesDockerfile:FROM atlashealth/ruby:2.2.2 RUN apt-get update -qq && \ apt-get install -qy build-essential libxml2-dev libxslt1-dev \ g++ qt5-default libqt5webkit5-dev xvfb dbus \ libmysqlclient-dev \ mysql-client openssh-client git && \ # cleanup apt-get clean && \ cd /var/lib/apt/lists && rm -fr *Release* *Sources* *Packages* && \ truncate -s 0 /var/log/*logCompose yaml:web: build: "." environment: - SSH_AUTH_SOCK=/ssh-agent volumes: - "$SSH_AUTH_SOCK:/ssh-agent"NOTE:I have interpolation running on my compose, so$SSH_AUTH_SOCKis substituted with/private/tmp/com.apple.launchd.ZxGtZy6a9w/Listenersfor example.I have forwarding setup on my host OSX properly, it works against another ubuntu host.Rundocker-compose run web bashIn-ContainerWhen I runssh-add -L, it statesCould not open a connection to your authentication agent.When I runssh-agent, it yieldsSSH_AUTH_SOCK=/tmp/ssh-vqjuo7FIfVOL/agent.21; export SSH_AUTH_SOCK; SSH_AGENT_PID=22; export SSH_AGENT_PID; echo Agent pid 22;When I runecho $SSH_AUTH_SOCKfrom bash, it yields/ssh-agentQuestionIt seems that compose is making theSSH_AUTH_SOCKavailable tobash, but it seems that thessh-agentis not getting that sameenv. What am I missing?
SSH Agent forwarding inside docker compose container
If you want to download files from a docker container to your local machine, you can do it with Docker itself (no need for SSH):

docker cp <container>:/path/to/file /host/target/path

Update: I just read that you are connected to a remote host. In that case you can use SCP for that:

scp user@host:/path/to/file /local/path
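If the file has to travel from the container all the way to your local workstation, one sketch is to combine the two: run docker cp on the remote host over SSH, then scp the result down (remote-host, mycontainer and the paths are placeholders):

ssh user@remote-host 'docker cp mycontainer:/path/to/file /tmp/file'   # container -> remote host
scp user@remote-host:/tmp/file /local/path/                            # remote host -> local machine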
I have a running container on a remote machine. I am connected to the machine via ssh. Now I would like to download a certain file from the container. Can somebody give me some tips on how to achieve that? Thanks. 🙂
Copy file from remote docker container
I found out that the node alpine image ships with yarn. Yarn is Facebook's npm replacement, and you can use it to globally install npm@5:

RUN npm -v
RUN yarn global add npm@5
RUN npm -v
COPY ./ ./
RUN npm run setup

(The version calls are superfluous and only there to highlight that the upgrade works.) And now it works:

Step 4/9 : RUN npm -v
 ---> Running in dca435fbec59
4.2.0
 ---> f6635e6c92a3
Removing intermediate container dca435fbec59
Step 5/9 : RUN yarn global add npm@5
 ---> Running in fac7216ccd91
yarn global v0.24.4
[1/4] Resolving packages...
[2/4] Fetching packages...
[3/4] Linking dependencies...
[4/4] Building fresh packages...
success Installed "[email protected]" with binaries:
- npm
Done in 10.47s.
 ---> b6b2e0f3fc36
Removing intermediate container fac7216ccd91
Step 6/9 : RUN npm -v
 ---> Running in 38a9ee95b9f0
5.0.0
 ---> d1632fc97b7e
Removing intermediate container 38a9ee95b9f0
Step 7/9 : COPY ./ ./
 ---> b9b62f53ca48
Removing intermediate container e9dd065c022f
Step 8/9 : RUN npm run setup
 ---> Running in aec36af706d4
> [email protected] setup /usr/hive-updater
> npm install --quiet && npm run build
added 102 packages in 5.156s
> [email protected] build /usr/hive-updater
> tsc

So if you have npm below version 5 and its upgrade method breaks for you, install yarn to upgrade npm ¯\_(ツ)_/¯

Sidenote: it might be better to just use yarn instead of npm@5; it still has a strong performance advantage. Compare these runs, both cached:

yarn install v0.24.5
[1/4] Resolving packages...
success Already up-to-date.
Done in 0.31s.

with npm@5:

npm install
updated 102 packages in 3.069s

I didn't know that yarn was already shipped with the alpine image.
Locally, I have successfully installed npm@5 via:$ npm install npm@5 -g $ npm -v $ 5.0.0And locally, I can run the npm setup just fine (it's basicallynpm i && tsc)$ npm run setup updated 102 packages in 3.499sYet now I also have a Dockerfile based upon thenode:7.10-alpineimage which breaks if I try to installnpm@5there.My Dockerfile looks like this:FROM node:7.10-alpine WORKDIR /usr/hive-updater/ ENV LAST_UPDATED=2016-12-08 NPM_CONFIG_LOGLEVEL=warn TERM=xterm PATH="$PATH:/usr/hive-updater/node_modules/.bin" RUN npm install npm@5 -g && npm -v COPY ./ ./ RUN npm run setup CMD ["node"]This will fail duringnpm -vwith:module.js:472 throw err; ^ Error: Cannot find module 'semver' at Function.Module._resolveFilename (module.js:470:15) at Function.Module._load (module.js:418:25) at Module.require (module.js:498:17) at require (internal/module.js:20:19) at Object. (/usr/local/lib/node_modules/npm/lib/utils/unsupported.js:2:14) at Module._compile (module.js:571:32) at Object.Module._extensions..js (module.js:580:10) at Module.load (module.js:488:32) at tryModuleLoad (module.js:447:12) at Function.Module._load (module.js:439:3)How to get the latest npm on my docker container?
How to upgrade npm to npm@5 on the latest node docker image?
They have changed the repo for .NET Core 2.1 onwards to microsoft/dotnet. Change your FROM statement to reference microsoft/dotnet using one of the following tags: 2.1-sdk, 2.1-aspnetcore-runtime, 2.1-runtime. Documentation on how to upgrade can be found here.
I created a new .NET Core 2.1 (preview) web app. Running it in local Docker with a Linux container, I am getting this build error:

Error Building blobtest
Service 'blobtest' failed to build: manifest for microsoft/aspnetcore:2.1 not found.

My dotnet version:

C:\WINDOWS\system32>dotnet --version
2.1.300-preview2-008530
AspNetCore:2.1 not found
I'm not sure you can do that exactly. Kubernetes does things quite differently than Docker, and isn't really ideal for interacting with the 'host' in the way you are probably used to with Docker.

A few alternative possibilities come to mind. First, and probably least ideal but closest to what you are asking, would be to add the file after the container is running, either by adding commands or args to the pod spec, or by using kubectl exec and echo'ing the contents into the file. Second would be to create a volume where that file already exists, e.g. create a GCE or EBS disk, add that file, and then mount the file location (read-only) in the container's spec. Third would be to create a new docker image where that file or other code already exists.

For the first option, the kubectl exec approach would be for one-off jobs; it isn't very scalable/repeatable. Any creation/fetching at runtime adds that much overhead to the start time for the container, so I normally go with the third option, building a new docker image whenever the file or code changes. The more often you change it, the more you'll probably want a CI system (like drone) to help automate the process.

Add a comment if I should expand any of these options with more details.
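For completeness, a small sketch of the first option using kubectl cp (a close cousin of the exec/echo approach mentioned above); the pod name and file paths are placeholders, and this only patches a running pod, so the change disappears when the pod restarts:

kubectl cp ./nginx.ssl.conf mypod:/etc/nginx/conf.d/default.conf   # copy the file into the running pod
kubectl exec mypod -- nginx -s reload                              # ask nginx inside the pod to pick it up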
How can I inject code/files directly into a container in Kubernetes on Google Cloud Engine, similar to the way you can mount host files/directories with Docker, e.g.:

docker run -d --name nginx -p 443:443 -v "/nginx.ssl.conf:/etc/nginx/conf.d/default.conf"

Thanks
Inject code/files directly into a container in Kubernetes on Google Cloud Engine
As of Docker v1.11 you can filter ps by volumes! Unfortunately this is not available in previous versions. Here's how it would be used:

docker ps -f "volume=/var/lib/mysql"

or

docker ps -f "volume=centos_db_symfony"
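On older Docker versions, a rough equivalent of the script the question hints at is to walk every container and print the volume names from its Mounts section (the grep pattern is just the volume name from the question):

for c in $(docker ps -aq); do
  docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Name}} {{end}}' "$c"
done | grep centos_db_symfony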
I can list all containers withdocker ps (-a), I can list all volumes withdocker volumes ls, when I inspect avolume, I can see thename,driverandmountpoint, but not the container(s) it is being used by.When I usedocker inspect , I can see the Mount data like this for example:"Mounts": [ { "Name": "centos_db_symfony", "Source": "/var/lib/docker/volumes/centos_db_symfony/_data", "Destination": "/var/lib/mysql", "Driver": "local", "Mode": "rw", "RW": true, "Propagation": "rprivate" },so in theory, I could write a script that loops through all containers to match a specificvolumeby name. But I ran some containers throughdocker-compose, and didn't bind a name (like now is possible in v2) to some volumes, so they show up like a sha256 in thedocker volume lslist, like this:DRIVER VOLUME NAME local 34009871ded5936bae81197247746b708c3ec9e9b9832b702c09736a90...etc local centos_data local centos_db_symfonyIn this case34009871ded5(example) was created before I named the volume indocker-composeandcentos_db_symfonyafter.QuestionWhendocker-compose.ymlvolume information is updated like in this case making the volume named and the information indocker inspect is updated, is the history forever lost or can I find out which container a volume was used by? If so, is it also possible to restore an old volume like this?Extra infodocker-compose version 1.6.0, build d99cad6 Docker version 1.10.2, build c3959b1
How to see which docker volume is or was being used by which container
As the error describes, xargs is not available. Looking at your Dockerfile, the JDK image you use is based on Oracle Linux, so you need to add the following line, which installs the required package:

RUN microdnf install findutils

For the Alpine-based images the command would be:

RUN apk update && apk add findutils

Your Dockerfile should be:

FROM gradle:7.1.0-jdk11 AS builder
WORKDIR /home/gradle/src
COPY --chown=gradle:gradle . /home/gradle/src
RUN gradle installDist

FROM openjdk:17-oracle
RUN microdnf install findutils
COPY --from=builder /home/gradle/src/build/install/app/ /app/
WORKDIR /app
CMD ["bin/app"]
I'm trying to run a hello world Java application in Docker. The application is produced by gradle init, and I use gradle installDist to generate the runnable file. I can run it locally without any problem, but I get an error when I try to run it from Docker. Here is the Dockerfile content:

FROM gradle:7.1.0-jdk11 AS builder
WORKDIR /home/gradle/src
COPY --chown=gradle:gradle . /home/gradle/src
RUN gradle installDist

FROM openjdk:17-oracle
COPY --from=builder /home/gradle/src/build/install/app/ /app/
WORKDIR /app
CMD ["bin/app"]

The Dockerfile is placed in the same folder as build.gradle and the docker build command is run from that folder. The build runs successfully, but as soon as I click run in the Docker GUI, the container fails immediately with the error message "xargs is not available".
Got error "xargs is not available" when trying to run a docker image
Kubernetes now has an experimental feature that enables limiting network bandwidth: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#support-traffic-shaping

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/ingress-bandwidth: 1M
    kubernetes.io/egress-bandwidth: 1M
How can I limit a container's network usage or bandwidth? I searched the Internet, but there seem to be no existing mature solutions. I can modify the host, but cannot modify the program running in Docker or Docker itself. That means I can change configuration, but not the code of Docker in a way that would require me to re-build/re-compile it.
How to limit docker image's network usage or bandwidth?
The difference is build vs. run. Think of images as apps and containers as processes running an app. Running an app does not change the app; running a container likewise does not change the image. Images are built from Dockerfiles using docker build and are persistent. Containers are created as needed by docker run, docker-compose, Kubernetes, or similar tools from images and are intended to be temporary.

The Dockerfile is used by the docker build command to build a new image. In the Dockerfile the first line usually specifies the base image with FROM, i.e. FROM nginx. Subsequent RUN lines in the Dockerfile provide the additional steps that docker build will execute in a shell, within the context of the FROM image, to create the new image. Note that the Dockerfile does not specify the name of the new image. Instead, the new image is named in the -t some/name option to docker build.

The docker-compose.yml file specifies a group of images to download and run together as part of a combined service. For example, the docker-compose.yml for a blog could consist of a web server image, an application image, and a database image, and would specify not only the images but also possibly how they communicate.

Since docker builds and docker compose are separate operations, there is no conflict or detection of differences. The docker-compose.yml controls what is going to be downloaded and run, and you can also build whatever you like.

Also, as @David Maze mentioned in comments: if you use both options then Docker Compose will build the image as specified and then tag it using the image: name; this can be confusing if you're putting a "standard" image name there. My guess is that if you do that, you might end up with an image, say nginx, on your own machine that does not match the Docker Hub image. Don't do that. Instead, use unique names for any images that you build.
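To make the build/run split concrete, a tiny illustration with made-up names:

docker build -t myorg/myapp:dev .      # Dockerfile (FROM ... / RUN ...) -> persistent image
docker run --rm myorg/myapp:dev        # image -> temporary container (a running process)
docker-compose up -d --build           # compose: builds services that declare "build:", then runs them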
I am currently in the process of learning Docker. After reading the docs and a few articles, I obviously have more questions than answers. The most intriguing one for me at the moment is: what is the difference between FROM some:docker-image in a Dockerfile and image: digitalocean.com/php in docker-compose.yml? I do understand that they should grab the image and create a container from it. What I don't understand is what happens if we specify both at the same time, for example:

version: '3'
services:
  #PHP Service
  app:
    build:
      context: .
      dockerfile: Dockerfile
    image: digitalocean.com/php

Both docker-compose.yml and the Dockerfile have images specified in them. What happens when those images are different? Will docker-compose.yml always win for that one service? Will it use only this 'top' image? Will they overlap somehow? Or maybe I got it all wrong? I did see this, but I am still not sure I understand what is going on.
Dockerfile FROM vs Docker-compose IMAGE
The manager node doesn't share out its local images by itself. You need to spin up a registry server (or use hub.docker.com). The effort needed for that isn't very significant:

# first create a user, updating $user for your environment:
if [ ! -d "auth" ]; then
  mkdir -p auth
fi
touch auth/htpasswd
chmod 666 auth/htpasswd
docker run --rm -it \
  -v `pwd`/auth:/auth \
  --entrypoint htpasswd registry:2 -B /auth/htpasswd $user
chmod 444 auth/htpasswd

# then spin up the registry service listening on port 5000
docker run -d -p 5000:5000 --restart=always --name registry \
  -v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
  -v `pwd`/registry:/var/lib/registry \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Local Registry" \
  -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
  -e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
  registry:2

# then push your image
docker login localhost:5000
docker tag my-customized-image localhost:5000/my-customized-image
docker push localhost:5000/my-customized-image

# then spin up the service with the new image name
# replace registryhost with ip/hostname of your registry Docker host
docker service create --name custom --network my-network \
  --constraint node.labels.myconstraint==true --with-registry-auth \
  registryhost:5000/my-customized-image
Maybe I missed something, but I made a local docker image. I have a 3 node swarm up and running. Two workers and one manager. I use labels as a constraint. When I launch a service to one of the workers via the constraint it works perfectly if that image is public.That is, if I do:docker service create --name redis --network my-network --constraint node.labels.myconstraint==true redis:3.0.7-alpineThen the redis service is sent to one of the worker nodes and is fully functional. Likewise, if I run my locally built image WITHOUT the constraint, since my manager is also a worker, it gets scheduled to the manager and runs perfectly well. However, when I add the constraint it fails on the worker node, fromdocker service ps 2l30ib72y65hI see:... Shutdown Rejected 14 seconds ago "No such image: my-customized-image"Is there a way to make the workers have access to the local images on the manager node of the swarm? Does it use a specific port that might not be open? If not, what am I supposed to do - run a local repository?
Docker: Swarm worker nodes not finding locally built image
Your link is working, but you're on separate networks inside of Docker. From the docker-compose.yml docs: "Note: If you're using the version 2 file format, the externally-created containers must be connected to at least one of the same networks as the service which is linking to them." To solve this, you can create your own network:

docker network create dbnet
docker network connect dbnet mysql

Then configure your docker-compose.yml with:

version: '2'
networks:
  dbnet:
    external:
      name: dbnet
services:
  wordpress:
    image: wordpress
    ports:
      - 80:80
    environment:
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: password
    volumes:
      - /var/www/somesite.com:/var/www/html
    networks:
      - dbnet

Note that with recent versions of Docker you shouldn't need to link the containers; the DNS service should do the name resolution for you.
I have started a docker container with the following commanddocker run --name mysql --restart always -p 3306:3306 -v /var/lib/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=password -d mysql:5.7.14and then would like to connect a wordpress site with the following docker-compose.yml fileversion: '2' services: wordpress: image: wordpress external_links: - mysql:mysql ports: - 80:80 environment: WORDPRESS_DB_USER: root WORDPRESS_DB_PASSWORD: password volumes: - /var/www/somesite.com:/var/www/htmlBut I keep getting the following errorStarting somesitecom_wordpress_1 Attaching to somesitecom_wordpress_1 wordpress_1 | wordpress_1 | Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 19 wordpress_1 | wordpress_1 | MySQL Connection Error: (2002) Connection refusedIt seems like theexternal_linksisn't working.Any idea what I am doing wrong?
docker-compose + external container
docker-py (https://github.com/docker/docker-py) should be used to control Docker via Python. This will start an Ubuntu container running sleep infinity:

import docker
client = docker.from_env()
client.containers.run("ubuntu:latest", "sleep infinity", detach=True)

Have a look at https://docker-py.readthedocs.io/en/stable/containers.html for more details (capabilities, volumes, ...).
I would like to start the docker container from a python script. When i call the docker image through my code , i am unable to start the docker containerimport subprocess import docker from subprocess import Popen, PIPE def kill_and_remove(ctr_name): for action in ('kill', 'rm'): p = Popen('docker %s %s' % (action, ctr_name), shell=True, stdout=PIPE, stderr=PIPE) if p.wait() != 0: raise RuntimeError(p.stderr.read()) def execute(): ctr_name = 'sml/tools:8' # docker image file name p = Popen(['docker', 'run', '-v','/lib/modules:/lib/modules', '--cap-add','NET_ADMIN','--name','o-9000','--restart', 'always', ctr_name ,'startup',' --base-port', 9000,' --orchestrator-integration-license', ' --orchestrator-integration-license','jaVl7qdgLyxo6WRY5ykUTWNRl7Y8IzJxhRjEUpKCC9Q=' ,'--orchestrator-integration-mode'], stdin=PIPE) out = p.stdin.write('Something') if p.wait() == -20: # Happens on timeout kill_and_remove(ctr_name) return outfollowing are docker container details for your referencedev@dev-VirtualBox:sudo docker ps -a [sudo] password for dev: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 79b3b9d215f3 sml/tools:8 "/home/loadtest/st..." 46 hours ago Up 46 hours pcap_replay_192.168.212.131_9000_delay_dirty_1Could some one suggest me why i could not start my container through my program
Starting docker container using python script
I came up with a workaround for the situation with suggestions from these sources: https://github.com/docker/machine/issues/1799 and https://github.com/docker/machine/issues/1872. I logged into the Minikube VM (minikube ssh) and edited the /usr/local/etc/ssl/certs/ca-certificates.crt file by appending my own CA cert. I then restarted the docker daemon while still within the VM:

sudo /etc/init.d/docker restart

This is not very elegant in that if I restart the Minikube VM, I need to repeat these manual steps each time. As an alternative, I also attempted to set the --insecure-registry myurl.com:5000 option in the DOCKER_OPTS environment variable (and restarted docker), but this didn't work for me.
I am attempting to use Minikube for local kubernetes development. I have set up my docker environment to use the docker daemon running in the provided Minikube VM (boot2docker) as suggested:eval $(minikube docker-env)It sets up these environment variables:export DOCKER_TLS_VERIFY="1" export DOCKER_HOST="tcp://192.168.99.100:2376" export DOCKER_CERT_PATH="/home/jasonwhite/.minikube/certs"When I attempt to pull an image from our private docker repository:docker pull oururl.com:5000/myimage:v1I get this error:Error response from daemon: Get https://oururl.com:5000/v1/_ping: x509: certificate signed by unknown authorityIt appears I need to add a trusted ca root certificate somehow, but have been unsuccessful so far in my attempts.I can hit the repository fine with curl using our ca root cert:curl --cacert /etc/ssl/ca/ca.pem https://oururl.com:5000/v1/_ping
Can not pull docker image from private repo when using Minikube
HOST: You won't be able to connect to the other container with localhost (as localhost is the current container), but you can connect via the container host (the host that is running your container). In your case you need the boot2docker VM IP (echo $(boot2docker ip)). For this to work, you need to expose your port at the host level (which you are doing with -p 1337:1337).

LINK: Another solution, which is most common and which I prefer when possible, is to link the containers. You need to add the --name flag to the server docker run command (--name sails_server) and the --link flag to the application docker run command (--link sails_server:sails_server). Inside your application you will then be able to access the server at sails_server:1337. You could also use environment variables to get the server IP. See the documentation: https://docs.docker.com/userguide/dockerlinks/

BONUS: DOCKER-COMPOSE: Your run commands may start to be a bit long... in this case I like to use docker-compose, which allows me to define my containers and their relationships (volumes, names, links, commands...) in one file.
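Putting the LINK option together as two concrete commands; the image names here are placeholders for your server and app images:

docker run -d --name sails_server -p 1337:1337 my/sails-image
docker run -d --link sails_server:sails_server -p 9000:9000 my/grunt-app
# inside the app container, the server is then reachable as http://sails_server:1337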
I have two services running in separate containers: one is grunt (the application), which runs off port 9000, and the other is sails.js (the server), which runs off port 1337. What I want to do is have the client app connect to the server through localhost:1337. Is this feasible? Thanks.
How to connect two docker containers through localhost?
According to the documentation, if you pass an archive file from the local filesystem (not a URL) to ADD in the Dockerfile (with a destination path, not a path + filename), it will uncompress the file into the directory given: "If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory. Resources from remote URLs are not decompressed. When a directory is copied or unpacked, it has the same behavior as tar -x: the result is the union of: 1) Whatever existed at the destination path and 2) The contents of the source tree, with conflicts resolved in favor of '2.' on a file-by-file basis." Try:

ADD /files/apache-stratos.zip /opt/

and see if the files are there, without further decompression.
I have a ~300Mb zipped local file that I add to a docker image. The next step then extracts the archive. The problem is that the ADD statement results in a commit that creates a new file system layer, making the image ~300Mb larger than it needs to be.

ADD /files/apache-stratos.zip /opt/apache-stratos.zip
RUN unzip -q apache-stratos.zip && \
    rm apache-stratos.zip && \
    mv apache-stratos-* apache-stratos

Question: Is there a work-around to ADD local files without causing a commit? One option is to run a simple web server (e.g. python -m SimpleHTTPServer) before starting the docker build, and then use wget to retrieve the file, but that seems a bit messy:

RUN wget http://localhost:8000/apache-stratos.zip && \
    unzip -q apache-stratos.zip && \
    rm apache-stratos.zip && \
    mv apache-stratos-* apache-stratos

Another option is to extract the zipped file at container start-up instead of build time, but I would prefer to keep the start-up as quick as possible.
Docker how to ADD a file without committing it to an image?
Making a few changes worked for me:

- Add cluster.initial_master_nodes to the elasticsearch service in the compose file:

  environment:
    - cluster.initial_master_nodes=elasticsearch

- The vm.max_map_count kernel setting on the Linux box needs to be set to at least 262144:

  $ sudo sysctl -w vm.max_map_count=262144

For development mode, you can use the below setting as well:

  environment:
    - discovery.type=single-node

Working compose file for me:

version: '2.2'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.0.1
    container_name: es01
    environment:
      - cluster.initial_master_nodes=es01
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200

For production mode, you must consider having multiple ES nodes/containers as suggested in the official documentation: https://www.elastic.co/guide/en/elasticsearch/reference/7.0/docker.html#docker-cli-run-prod-mode
I am using Docker Desktop with linux containers on Windows 10 and would like to launch the latest versions of the elasticsearch and kibana containers over a docker compose file.Everything works fine when using some older version like 6.2.4.This is the working docker-compose.yml file for 6.2.4.version: '3.1' services: elasticsearch: image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4 container_name: elasticsearch ports: - "9200:9200" volumes: - elasticsearch-data:/usr/share/elasticsearch/data networks: - docker-network kibana: image: docker.elastic.co/kibana/kibana:6.2.4 container_name: kibana ports: - "5601:5601" depends_on: - elasticsearch networks: - docker-network networks: docker-network: driver: bridge volumes: elasticsearch-data:I deleted all installed docker containers and adapted the docker-compose.yml file by changing 6.2.4 to 7.0.1. By starting the new compose file everything looks fine, both the elasticsearch and kibana containers are started. But after a couple of seconds the elasticsearch container exits (the kibana container is running further). I restarted everything, attached a terminal to the elasticsearch container and saw the following error message:... ERROR: [1] bootstrap checks failed [1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured ...What must be changed in the docker-compose.yml file to get elasticsearch 7.0.1 working?
docker-compose.yml for elasticsearch 7.0.1 and kibana 7.0.1
After some research, I found out that the docker logs -t command prints timestamps in UTC and there is no config option to change that. However, you could use a little script like the one referenced in https://github.com/docker/cli/issues/604, where you just pipe the output and convert the given timestamps.
My local timezone and the docker container's timezone are both set to 'GMT+8:00', but 'docker logs -t' still shows timestamps in 'GMT+0:00'. The picture below shows part of the output of 'docker logs -t': the left timestamp is printed by docker, and the right timestamp is printed by the application in the container.
how to set timezone of 'docker logs -t'?
Your Dockerfile is fine, and COPY package*.json ./ is not necessary - it's being copied with your entire app. The problem is in your docker-compose file. You defined:

build:
  context: .
  dockerfile: ClientApp/Dockerfile

That means your Dockerfile will accept the docker-compose context, which is one directory above:

├── docker-compose.yml   -- this is the actual context
└── ClientApp
    ├── Dockerfile       -- this is the expected context

So when the CMD is running, it is one directory above the package.json, therefore it cannot run the command and the container exits. To fix it, give your docker-compose file the correct context:

build:
  context: ClientApp
  dockerfile: Dockerfile

and it should work.
I am trying to wire up my Angular 7 client app into docker-compose locally. I get the following error when I run docker-compose up:

client-app | npm ERR! errno -2
client-app | npm ERR! syscall open
client-app | npm ERR! enoent ENOENT: no such file or directory, open '/app/package.json'

Dockerfile:

FROM node:9.6.1
RUN mkdir -p /app
WORKDIR /app
EXPOSE 4200
ENV PATH /app/node_modules/.bin:$PATH
COPY . /app
RUN npm install --silent
RUN npm rebuild node-sass
CMD ["npm", "run", "docker-start"]

The compose part for the client is:

client-app:
  image: ${DOCKER_REGISTRY-}client
  container_name: client-app
  ports:
    - "4200:81"
  build:
    context: .
    dockerfile: ClientApp/Dockerfile

package.json is in the ClientApp folder alongside the Dockerfile, so I would assume COPY . /app should copy the package.json to the container. I don't have any excludes in dockerignore. I am using Docker for Windows with Unix containers. I tried npm init before (but that will create an empty package.json anyway) and looked through the SO posts, but most of the Dockerfile definitions look exactly the same. I also tried COPY package*.json ./ additionally, and building the image with --no-cache.
npm can't find package.json when running docker container with compose
Here's how to update an existing image with docker commit.

Launch a container with the image you want to modify:

docker run -t -i IMAGE /bin/bash

Note that you'll probably want to access some host files/directories to import changes into the container:

docker run -t -i -v /host/location:/mnt/share IMAGE /bin/bash

Then quit with Ctrl-D or exit.

If you want to automate this in a script, you'll need to get the container id for the next step. And you'll want to issue commands directly instead of calling an interactive session of bash:

container_id=$(docker run -d -v /host/location:/mnt/share IMAGE /bin/bash -c "
  ## any bash code
  rsync -av --delete --exclude .git /mnt/share /my/app/
  cd /my/app
  ./autogen.sh
")

Commit your modified container filesystem as a new image:

docker commit CONTAINER_ID IMAGE_NAME

Note: you could want to use the same IMAGE_NAME as the one you first launched the container with. This will effectively update your image.

Additional concerns:

Any modification carried out on a previous image should try to minimize the new layer created upon the last image. The rules probably depend on whether you are using BTRFS (block-level modifications will actually be in the 'layer') or AUFS (file-level modifications). The best would be to avoid replacing whole source files with the same files (avoid cp -a and git checkout-index; favor rsync or git checkout).

You'll need to install some tools on your VM so that you can make your updates (probably git, rsync...). But don't forget you could also provide scripts (or even complete tools) thanks to the mounted host volume.

The created image is not orthodox and does not come from a Dockerfile. You should probably rebuild a full new image quite regularly from an official Dockerfile. Or at least try to minimize layering by having all your images based directly on one official image.
I want to leverage caching/layering of docker images to save bandwidth, disk space, and time spent.Let say:I've a web-app docker image installed and deployed into several docker hosts.The docker image contains source code of my web app.I worked on the code, and now have a new version of the code.How should I automate the creation ofa new docker commit above last imagecontaining only the bugfix ?My goal is that only the small bugfix diff will be required to download to get the new images for docker hosts that already downloaded previous image.This is the sate of my current reflexion about it:I'll probably end usingdocker commitsomehow to save update in the image.Buthow can I access the image content ?And even then, how would I import my changes without cluttering the original docker images with various tools (git and shell scripts) that have nothing to do with serving the web app ?.I've looked at volumes to share the code with another docker that would take care of updating it. But volumes don't get committed.Thanks for any insight on how to achieve this !EDIT: Using multiple Dockerfile seems another way to do this, thxhttp://jpetazzo.github.io/2013/12/01/docker-python-pip-requirements/for the similar concerns. It seems I'll need to generate my dockerfiles on the fly.
Updating docker images with small changes using commits
It's inside the virtual machine and isn't directly accessible from the host. Debug-level commands like docker volume inspect will give you a path, but they really are only for emergency debugging and not for routine use. If you have a way to get a shell in the VM you can see that path, but you really shouldn't be directly accessing files there, and you shouldn't be routinely docker inspect-ing anything.
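For the occasional debugging session, a sketch of what that looks like with a docker-machine/boot2docker VM; the machine name default and the volume name test-data are assumptions based on the question:

docker volume inspect test-data --format '{{.Mountpoint}}'                      # path as seen *inside* the VM
docker-machine ssh default "sudo ls /var/lib/docker/volumes/test-data/_data"    # peek at the data from within the VM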
I need to know where docker volumes are located when using docker-machine on macOS. The installation uses boot2docker, so a VM works behind the scenes. Example:

docker volume create test-data

docker inspect shows a path, but where can I find the specific (physical) location?
Where docker volumes are located?
Swarm is a very simple add-on to Docker. It currently does not provide all the features of Kubernetes. It is currently hard to predict how the ecosystem of these tools will play out; it's possible that Kubernetes will make use of Swarm.
From what I understand, Kubernetes/Mesosphere is a cluster manager and Docker Swarm is an orchestration tool. I am trying to understand how they are different. Is Docker Swarm analogous to the POSIX API in the Docker world while Kubernetes/Mesosphere are different implementations? Or are they different layers?
What is the difference between Docker Swarm and Kubernetes/Mesosphere?
Yes, you often need to add extra resource files like certificates, especially when using a minimal distribution like Alpine, but the fact that you can run Go applications on such small distributions is often also seen as an advantage. For adding the certificates, this is a really good explanation outlining how to do it on a scratch container: https://blog.codeship.com/building-minimal-docker-containers-for-go-applications/. If you would rather stick with Alpine, then you can install this package to get them: https://pkgs.alpinelinux.org/package/v3.7/main/x86/ca-certificates
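For the build itself, one commonly used combination (a sketch, not the only correct set of flags) is to disable cgo so the net package uses its pure-Go resolver, and to install the certificate bundle in the Alpine image:

CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o app .
# and in the Dockerfile for the alpine-based image:
#   RUN apk add --no-cache ca-certificates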
I want to build a Go 1.9.2 binary and run it on the Docker Alpine image. The Go code I wrote doesn't call any C code. It also uses the net package. Unfortunately it hasn't been as simple as it sounds, as Go doesn't seem to quite build static binaries all the time. When I try to execute the binary I often get cryptic messages about why the binary didn't execute. There's quite a bit of information on the internet about this, but most of it ends up with people using trial and error to make their binaries work. So far I have found that the following works, however I don't know why, whether it is optimal, or whether it could be simplified:

env GOOS=linux GARCH=amd64 go install -v -a -tags netgo -installsuffix netgo -ldflags "-linkmode external -extldflags -static"

What is the canonical way (if it exists) to build a Go binary that will run on the Alpine 3.7 docker image? I am happy to use apk to install packages to the Alpine image if that would make things more efficient/easier. (I believe I need to install ca-certificates anyway.)
How do I build a static Go binary for the Docker Alpine image?
You can achieve that by using the Docker Remote API. First of all, adjust how the docker daemon is running: configure it to listen to HTTP requests on port 4243 in addition to the default unix socket:

sudo sh -c "echo 'DOCKER_OPTS=\"-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock\"' > /etc/default/docker"

Now, you can use the /containers/create endpoint to create a container without running it:

curl -X POST -H "Content-Type: application/json" http://localhost:4243/containers/create?name=my_first_container -d '
{
  "Name": "dtest2",
  "AttachStdin": "false",
  "AttachStdout": "false",
  "AttachStderr": "false",
  "Tty": "false",
  "OpenStdin": "false",
  "StdinOnce": "false",
  "Cmd": ["/bin/bash", "-c", "echo Starting;sleep 20;echo Stopping"],
  "Image": "ubuntu",
  "DisableNetwork": "false"
}
'

Pay attention to the ?name=my_first_container parameter I added to the curl request URL. This is how you name your container.

Side note: the same can be achieved without adding the HTTP interface; however, it seems easier to show the solution using a simple curl POST request.
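On newer Docker releases the CLI exposes the same thing directly, which may be simpler than talking to the API by hand; this reuses the flags from the question (sample/containe is the image name exactly as given there):

docker create -e ENV1=a -e ENV2=b -p 80:80 --name my_first_container sample/containe
docker start my_first_container    # later, when Upstart decides to run it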
As part of my deployment strategy, I am managing Docker containers with Upstart. To do that, I need to pull an image from a registry and create a named container (as suggested in "Upstart script to run container won't manage lifecycle"). Is there a way to create the container without first running the image? I don't want to have to start a container (which may introduce side effects), stop it, and then manage it elsewhere. For example, something like:

docker.io create -e ENV1=a -e ENV2=b -p 80:80 --name my_first_container sample/containe
Create Docker container from image without starting it
Thanks to @stdunbar in the comments for pointing me in the right direction.

EDIT: Thanks to @Imran in the comments. If you start lots of threads, they will absolutely be scheduled to multiple cores. This answer is only about getting Runtime.getRuntime().availableProcessors() to return the right value. Many "thread pools" start as many threads as that method returns: it should return the number of cores available.

There seem to be two main solutions, neither of which is ideal:

1. Set the cpu parameter in the task definition. For example, if you have 2 cores and want to use them both, you have to set "cpu":2048 in the task's definition. This isn't very convenient for two reasons. If you choose a bigger instance, you have to make sure to update this parameter. And if you want to have two tasks running simultaneously, both of which can sporadically use all cores for short-term activities, AWS will not schedule two tasks on a 2-core system with "cpu":2048; it says the VM is "full" from a CPU perspective. This goes against the timesharing (Unix etc.) philosophy of every task taking what it needs (for example, imagine that on a dual-core desktop PC running Word and Excel, Windows wouldn't allow you to start any other tasks, on the grounds that Word might need all of one core, and Excel might too, so if another program might need all the cores at the same time, there wouldn't be enough).

2. Use the -XX:ActiveProcessorCount=xx JVM option in JDK 10 onwards, as described here. This isn't convenient because, as above, you have to change the value if you change your instance type.

I wrote a longer blog post describing my findings here: https://www.databasesandlife.com/java-docker-aws-ecs-multicore/
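For reference, the second option boils down to a single extra JVM flag; the count of 2 below matches the t3.medium from the question and would need adjusting per instance type, and app.jar is a placeholder for your own artifact:

java -XX:ActiveProcessorCount=2 -jar app.jar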
I am running a task via Docker on AWS's ECS. The task does some calculations which are CPU-bound, which I would like to run in parallel. I start a thread pool with the number of threads specified inRuntime.getRuntime().availableProcessors()which works fine locally on my PC. For some reason, on AWS ECS, this always returns 1, even though there are multiple cores available. Therefore my calculations run serially, and do not utilize the multiple cores.For example, right now, I have a task running on a "t3.medium" instance which should have 2 cores according to thedocs.When I execute the following code:System.out.println("Java reports " + Runtime.getRuntime().availableProcessors() + " cores");Then the following gets displayed on the log:Java reports 1 coresI do not specify thecpuparameter in ECS's task definition. I see that in the list of tasks within the ECS Management Console it has a column for "CPU" which reads 0 for my task. I also notice that in the list of instances (= VMs) it lists "CPU available" as 2048 which presumably has something to do with the fact the VM has 2 cores.I would like my Java program to see all cores that the VM has to offer. (As would normally be the case when a Java program runs on a computer without Docker).How do I go about doing that?
Runtime.getRuntime().availableProcessors() returning 1 even though many cores available on ECS AWS
We can create a Docker image without Docker being installed.

Jib Maven and Gradle Plugins

Google has an open source tool called Jib that is relatively new, but quite interesting for a number of reasons. Probably the most interesting thing is that you don't need docker to run it: it builds the image using the same standard output as you get from docker build, but doesn't use docker unless you ask it to, so it works in environments where docker is not installed (not uncommon in build servers). You also don't need a Dockerfile (it would be ignored anyway), or anything in your pom.xml to get an image built in Maven (Gradle would require you to at least install the plugin in build.gradle).

Another interesting feature of Jib is that it is opinionated about layers, and it optimizes them in a slightly different way than the multi-layer Dockerfile created above. Just like in the fat jar, Jib separates local application resources from dependencies, but it goes a step further and also puts snapshot dependencies into a separate layer, since they are more likely to change. There are configuration options for customizing the layout further.

Please refer to this link: https://cloud.google.com/blog/products/gcp/introducing-jib-build-java-docker-images-better

For example, with Spring Boot refer to https://spring.io/blog/2018/11/08/spring-boot-in-a-container
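As a sketch of what "nothing in your pom.xml" looks like in practice, a one-off Maven invocation along these lines builds and pushes an image; VERSION and the registry path are placeholders to replace with a real Jib release and your own image name:

mvn compile com.google.cloud.tools:jib-maven-plugin:VERSION:build -Dimage=registry.example.com/myapp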
Is it somehow possible to build images without having Docker installed? On the Maven build of my project I'd like to produce a Docker image, but I don't want to force others to install Docker on their machines. I can think of some VirtualBox image with Docker installed, but that is kind of a heavy solution. Is there some way to build the image with some Maven plugin only, some Go code, or an already prepared VirtualBox image for exactly this purpose? It boils down to the question of how to use Docker without forcing users to install anything, either just for the build or even for running Docker images.

UPDATE: There are some, not really up-to-date, Maven plugins for virtual machine provisioning with Vagrant or with VirtualBox. I have also found an article about building Docker images without Docker, using Bazel. So far I see two options: either I can somehow build the images only, or run some VM with the docker daemon inside (which can be used not only for builds, but even for integration tests).
Build docker image without docker installed
Looking at https://registry.hub.docker.com/u/library/jenkins/, it seems that /var/jenkins_home is a volume. You can only create files there while the container is running, presumably with a volume mapping like:

docker run ... -v /your/jenkins/home:/var/jenkins_home ...

The docker build process knows nothing about shared volumes.
I'm new to Docker and am trying to build an image with a simple Dockerfile:

FROM jenkins
USER root
RUN mkdir -pv /home/a/b
RUN touch /home/a/b/test.txt
RUN mkdir -pv /var/jenkins_home/a/b
RUN touch /var/jenkins_home/a/b/test.txt
USER jenkins

When I build it, it fails with the following output:

Step 0 : FROM jenkins
Step 1 : USER root
Step 2 : RUN mkdir -pv /home/a/b
mkdir: created directory '/home/a'
mkdir: created directory '/home/a/b'
Step 3 : RUN touch /home/a/b/test.txt
Step 4 : RUN mkdir -pv /var/jenkins_home/a/b
mkdir: created directory '/var/jenkins_home/a'
mkdir: created directory '/var/jenkins_home/a/b'
Step 5 : RUN touch /var/jenkins_home/a/b/test.txt
touch: cannot touch '/var/jenkins_home/a/b/test.txt': No such file or directory

Can anyone tell me what I am missing here? Why does the first mkdir & touch combination work and the second does not?
Building Dockerfile fails when touching a file after a mkdir
I've handled this by using the composer docker image to install the dependencies. Clone the repo and then run the following command from within the root directory:

docker run --rm --interactive --tty -v $(pwd):/app composer install

By mounting your repository into the container, the composer container will write the vendor directory and it will appear on your host.
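After the vendor directory exists, Sail's own binary is back in place, so (as a rough follow-up, assuming a standard Laravel setup) you can continue with:

cp .env.example .env          # only if the clone doesn't already ship a .env
./vendor/bin/sail up -d       # start the Sail containers as before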
I currently have my own Laravel application running on Docker using Laravel Sail on Windows 11 with Ubuntu on WSL2. This works fine and as intended. I've pushed my work to a Git repository, but how would I be able to pull this onto a new system? The vendor files that come with Laravel Sail when you install it won't be sent to the repository, so Sail will be useless until Composer's vendor files are installed. I'm new to Docker; would this mean I have to install Composer and PHP on Linux (WSL2) and then install the vendor files? Is there any easier method, or is this the conventional way? Thank you for any help.
Laravel Sail after cloning from Git repository
OS X does not use the Linux kernel, so it cannot run in a Docker container. Xcode is not open-sourced and does not have a Linux installer, so it cannot be used in a Linux Docker image. It seems like your best bet is to build a Packer template using something like packer-macos or osx-vm-templates and integrate that into your pipeline.
For CI purposes I have a need to set up a cluster of build slaves capable of building iOS apps. For now I'm relying on a single MacMini -with the aim to deploy several more in the future- and I'd like to virtualize several slaves on top of it. Some of these virtual slaves will build the iOS app, others will be smaller Linux slaves for miscellaneous purposes.I'm completely new to Docker, so my main question is whether it's possible to dockerize Xcode 9.2 and/or MacOS in order to virtualize my iOS build slaves. I've seen very little literature out there on whether this can be achieved and I've found some images in hub.docker.com but they're not documented and don't appear to be very popular.I'm going through a Docker tutorial right now and eventually will be attempting this -and if I'm successful I'll be answering my own question here for the benefit of others- but given the lack of information I have doubts on whether it is even possible or where I should even start.Any tips or pointers on this would be greatly appreciated. Or if anyone knows for fact that this is not possible and can explain why, that would also save me a lot of time.
How to dockerize Xcode
Maybe you can try this: before you call RUN, ADD the .env file into the image

    ADD proxies.env proxies.env

then prefix your RUN statement:

    RUN export `cat proxies.env` && echo "FOO is $FOO and BAR is $BAR"

This produces the following output:

    root@armenubuntudev:~/Dockers/set-env# docker build -t ashimoon/envtest .
    Sending build context to Docker daemon 3.584 kB
    Sending build context to Docker daemon
    Step 0 : FROM ubuntu
     ---> 91e54dfb1179
    Step 1 : ADD proxies.env proxies.env
     ---> Using cache
     ---> 181d0e082e65
    Step 2 : RUN export `cat proxies.env` && echo "FOO is $FOO and BAR is $BAR"
     ---> Running in 30426910a450
    FOO is 1 and BAR is 2
     ---> 5d88fcac522c
    Removing intermediate container 30426910a450
    Successfully built 5d88fcac522c
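An alternative worth noting, and an addition of mine rather than part of this answer: on Docker/Compose versions newer than the ones in the question, build arguments expose host proxy values to RUN steps without baking a file into the image. A hedged sketch, with the service name taken from the question:

    # Dockerfile
    ARG http_proxy
    ARG https_proxy
    RUN echo "building with proxy: $http_proxy"

    # docker-compose.yml (values are taken from the host environment at build time)
    # myserver:
    #   build:
    #     context: ./myserver
    #     args:
    #       - http_proxy
    #       - https_proxy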
I have a Dockerised application which I would like to run in both proxy and non-proxy host environments. I'm trying to resolve this problem by copying the normal environment variables, such as http_proxy, into the containers if and only if they exist in the host. I can get 90% of the way there by running

    set | grep -i _proxy= > proxies.env

in a top-level script, and then having, in my docker-compose.yml:

    myserver:
      build: ./myserver
      env_file:
        - proxies.env

This copies the host's proxy environment variables, if any, into the server container, and it works in the sense that these variables are available at container run time, in other words by the stage that the Dockerfile CMD or ENTRYPOINT executes. However I have one container which needs to run npm as a build step, i.e. from a RUN command in the Dockerfile, and these variables appear not to be present at this stage, so npm can't find the proxy and hangs. In other words, if I have RUN set in my Dockerfile, I can't see any variables from proxies.env, but if I do docker exec -it myserver /bin/bash and then run set, I can see everything from proxies.env. Can anyone recommend a way to make these variables visible at container build time, without having to hard-code them, so that my docker-compose.yml and Dockerfile will still work both for hosts with proxies and hosts without proxies? (Running with centos 7, docker-compose 1.3.1 and docker 1.7.0)
How to make environmental variables available to Docker RUN commands from docker-compose?
Something in Ansible appears to be recognizing that as valid Python, so it's getting transformed into a Python list and then serialized using Python's str(), which is why you end up with the single-quoted values. An easy way to work around this is to stick a space at the beginning of the value, which seems to prevent it from getting converted into Python:

    - name: Build docker image
      hosts: localhost
      vars:
        - somevar: whatever
        - image_tag: "blabla/booboo"
        - docker_copy_files: []
        - docker_file_content:
          - instruction: CMD
            value: ' ["/usr/bin/runit", "{{somevar}}"]'
      roles:
        - peruncs.docker

This results in:

    CMD ["/usr/bin/runit", "whatever"]
I am trying to generate Dockerfiles with an Ansible template; see the role source and the template in Ansible Galaxy and GitHub. I need to generate a standard Dockerfile line like:

    ...
    VOLUME ["/etc/postgresql/9.4"]
    ...

However, when I put this in the input file:

    ...
    instruction: CMD
    value: "[\"/etc/postgresql/{{postgresql_version}}\"]"
    ...

it ends up rendered like:

    ...
    VOLUME ['/etc/postgresql/9.4']
    ...

and I lose the " (which renders the Dockerfiles useless). Any help? How can I convince Jinja not to substitute " with '? I tried \", the |safe filter, even {% raw %}; it just keeps doing it!

Update: here is how to reproduce the issue. Get the peruncs.docker role from galaxy.ansible.com or GitHub (link is given above). Write up a simple playbook (say demo.yml) with the content below and run: ansible-playbook -v demo.yml. The -v option will allow you to see the temp directory where the generated Dockerfile goes with the broken content, so you can examine it. Generating the docker image is not important to succeed, just try to get the Dockerfile right.

    - name: Build docker image
      hosts: localhost
      vars:
        - somevar: whatever
        - image_tag: "blabla/booboo"
        - docker_copy_files: []
        - docker_file_content:
          - instruction: CMD
            value: '["/usr/bin/runit", "{{somevar}}"]'
      roles:
        - peruncs.docker

Thanks in advance!
why ansible always replaces double quotes with single quotes in templates?
Add USER root to your Dockerfile:

    FROM mcr.microsoft.com/mssql/server:2019-latest
    USER root
    SHELL ["/bin/bash", "-c"]
    COPY ./CompanyCert.crt /usr/local/share/ca-certificates/CompanyCert.crt
    RUN update-ca-certificates
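If you want SQL Server to keep running as the unprivileged account afterwards, a small addition of mine (not part of the original answer; "mssql" is simply the user that whoami reported in the question) is to drop back down after the root-only step:

    # ... root-only steps above ...
    RUN update-ca-certificates
    # return to the image's unprivileged account
    USER mssql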
I ran this command:

    docker pull mcr.microsoft.com/mssql/server:2019-latest

I then made a Dockerfile to use this container image as a base image for another container:

    # escape=`
    FROM mcr.microsoft.com/mssql/server:2019-latest
    SHELL ["/bin/bash", "-c"]
    COPY ./CompanyCert.crt /usr/local/share/ca-certificates/CompanyCert.crt
    RUN update-ca-certificates

When I try to build that Dockerfile, I get this error:

    ln: failed to create symbolic link '/etc/ssl/certs/CompanyCert.pem': Permission denied

So I added a RUN whoami to my Dockerfile and it returns mssql. When I run id -u it returns 10001. So it seems that the user mssql does not have root permissions. I tried putting sudo in front of my call to update-ca-certificates but it says:

    /bin/bash: sudo: command not found

I tried to RUN su - and that returns:

    su: must be run from a terminal

I have successfully used the above Dockerfile to install my company certificates on other containers from Microsoft, but it is failing spectacularly this time. How can I get root access so I can install my company certificate on this SQL Server container?
Switch to Root User in a Dockerfile
Use links when you want to link together containers within the same docker-compose.yml. All you need to do is set the link to the service name. Like this:

    ---
    elasticsearch:
      image: elasticsearch:latest
      command: elasticsearch -Des.network.host=0.0.0.0
      ports:
        - "9200:9200"
    logstash:
      image: logstash:latest
      command: logstash -f logstash.conf
      ports:
        - "5000:5000"
      links:
        - elasticsearch

If you want to link a container inside of the docker-compose.yml to another container that was not included in the same docker-compose.yml or started in a different manner, then you can use external_links and you would set the link to the container's name. Like this:

    ---
    logstash:
      image: logstash:latest
      command: logstash -f logstash.conf
      ports:
        - "5000:5000"
      external_links:
        - my_elasticsearch_container

I would suggest the first way unless your use case for some reason requires that they cannot be in the same docker-compose.yml.
I believe this is a simple question but I still do not get it from the Docker Compose documentation. What is the difference between links and external_links? I like external_links as I want to have a core docker-compose file and I want to extend it without overriding the core links. What I have exactly: I am trying to set up logstash, which depends on elasticsearch. Elasticsearch is in the core docker-compose file and logstash is in the dependent one. So I had to define elasticsearch in the dependent docker-compose file as a reference, as logstash needs it as a link. But elasticsearch already has its own links, which I do not want to repeat in the dependent file. Can I do that with external_links instead of links? I know that links will make sure the linked container is up before linking; will external_links do the same? Any help is appreciated. Thanks.
Docker-compose links vs external_links
It is possible to create a docker container from TFS and integrate it with a build/release pipeline. Some tutorials for this area:

Continuous Deployment with Docker and Build vNext
Using docker on Windows in VSTS build and release management

However, it's not possible to build a Windows 7 Docker container. If you plan on doing a full installation of Windows 7, you should use a VM; Docker is not meant to be used in that sense. For more details please refer to this similar question: Build a Windows 7 Container.
We are running on-premises TFS 2017. I would like to create a release definition for our QA team which will create a Docker container running Windows 7, and deploy our release build to it automatically.Once the deployment is done the QA team should be able to log onto the container to test the app. No manual running of a MSI installer or Setup.exe.Ideally each queued release will create its own container with its own copy of the released build.Is this possible? Or recommended? All our servers and hosts will be in-house, we will not be using Azure.Thanks in advance for any advice.
Is it possible to create a docker container from TFS and deploy a release build to it?
It seems that Docker-compose came before the swarm and stack and maybe the new solution of swarm + stack makes compose obsolete, but it still remains for legacy reasons. Is this thinking correct?

In short, yes. Compose came before all the Swarm stuff (it originated as a 3rd party utility called fig). To make matters worse, there are even two different Swarms: old Swarm (the one that was a separate tool) and Swarm Mode (which is built into the docker binary these days). It seems to be evolving into services and deployment concepts that get built into Docker. But I would guess Docker Compose and the Swarm Mode deployment stuff will live side by side for a while. It is also beneficial to know that Docker Compose underpinnings live as a library called libcompose (https://github.com/docker/libcompose) which other 3rd party utilities make use of to support the docker-compose.yml file format for deploying (see Rancher and rancher-compose as an example). I would imagine they would make an effort to continue support for libcompose. I am unclear if the Docker Swarm deployment stuff actually uses libcompose. In my cursory searches, it would appear that Swarm Mode does not implement libcompose and does its own thing. I am not sure how this relates to the future of Docker Compose and libcompose. Interpret as you see fit...
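As a concrete illustration of the overlap described above (my addition; the stack name is just a placeholder), the same v3 compose file can usually be consumed by both tools:

    # local development on a single host
    docker-compose up -d

    # Swarm Mode deployment of the same file across a cluster
    docker swarm init
    docker stack deploy -c docker-compose.yml mystack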
From what I read it seems that Docker-Compose is a tool to create multiple containers on a single host while Docker Swarm is a tool that can do the exact same thing but with more control and on multiple hosts with the help of Docker Stack. I went through the tutorial and also came across this thread: docker-compose.yml vs docker-stack.yml what difference? And I'm coming to the conclusion that there's no reason to ever use Docker-Compose when you can use Docker Swarm with Docker Stack. They can even use the same docker-compose.yml. It seems that Docker-compose came before the swarm and stack and maybe the new solution of swarm + stack makes compose obsolete, but it still remains for legacy reasons. Is this thinking correct? If not, what benefits does Docker-Compose have over Docker Swarm and Docker Stack in terms of making a development or production environment?
What benefits does Docker Compose have over Docker Swarm and Docker Stack?
Please take a look at the following compose script. I tried and tested it; it works fine.

    version: '2'
    services:
      db:
        image: mysql:latest
        container_name: db_server
        volumes:
          - ./database/data:/var/lib/mysql
          - ./database/initdb.d:/docker-entrypoint-initdb.d
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: password123 # any random string will do
          MYSQL_DATABASE: udb_test # the name of your mysql database
          MYSQL_USER: me_prname # the name of the database user
          MYSQL_PASSWORD: password123 # the password of the mysql user
      example:
        depends_on:
          - db
        image: wordpress:php7.1 # we're using the image with php7.1
        container_name: wp-web
        environment:
          WORDPRESS_DB_HOST: db:3306
          WORDPRESS_DB_USER: me_prname
          WORDPRESS_DB_PASSWORD: password123
          WORDPRESS_DB_NAME: udb_test
        ports:
          - "1234:80"
        restart: always
        volumes:
          - ./src:/var/www/html

Let me know if you encounter further issues.
I've been making new sites with WordPress & Docker recently and have a reasonable grasp of how it all works, and I'm now looking to move some established sites into Docker. I've been following this guide: https://stephenafamo.com/blog/moving-wordpress-docker-container/ I have everything set up as it should be, but when I go to my domain.com:1234 I get the error message 'Error establishing a database connection'. I have changed 'DB HOST' to 'mysql' in wp-config.php as advised and all the DB details from the site I'm bringing in are correct. I have attached to the mysql container and checked that the DB is there with the right user, and also made sure the password is correct via the mysql CLI. SELinux is set to permissive and I haven't changed any dir/file ownership or permissions; for the latter, dirs are all 755 and files 644 as they should be.

Edit: I should mention that database/data and everything under it seems to be owned by user/group 'polkitd input' instead of root.

Docker logs aren't really telling me much either, apart from the 500 error messages for the WP container when I browse the site on port 1234 (as expected though). This is the docker-compose file:

    version: '2'
    services:
      example_db:
        image: mysql:latest
        container_name: example_db
        volumes:
          - ./database/data:/var/lib/mysql
          - ./database/initdb.d:/docker-entrypoint-initdb.d
        restart: always
        environment:
          MYSQL_ROOT_PASSWORD: password123 # any random string will do
          MYSQL_DATABASE: mydomin_db # the name of your mysql database
          MYSQL_USER: my domain_me # the name of the database user
          MYSQL_PASSWORD: password123 # the password of the mysql user
      example:
        depends_on:
          - example_db
        image: wordpress:php7.1 # we're using the image with php7.1
        container_name: example
        ports:
          - "1234:80"
        restart: always
        links:
          - example_db:mysql
        volumes:
          - ./src:/var/www/html

Suggestions most welcome as I'm out of ideas!
Moving Wordpress site to Docker: Error establishing DB connection
I've been having the same problem with the Docker Cassandra image. You can use my docker container on GitHub or on Docker Hub instead of the default Cassandra image. The problem is that the cassandra.yaml file has start_rpc set to false. We need to change that. To do that we can use the following Dockerfile (which is what my image does):

    FROM cassandra
    RUN sed -i 's/^start_rpc.*$/start_rpc: true/' /etc/cassandra/cassandra.yaml
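If rebuilding the image is not an option, a runtime alternative (my suggestion, not part of the original answer, and it only lasts for the life of that container) is to flip Thrift on with nodetool inside the running container:

    docker run --name cs1 -d cassandra
    # wait for the node to finish starting, then:
    docker exec cs1 nodetool enablethrift
    # verify
    docker exec cs1 nodetool statusthrift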
I'm trying to start up a docker image that runs cassandra. I need to use thrift to communicate with cassandra, but it looks like that's disabled by default. Checking out the cassandra logs shows:

    INFO 21:10:35 Not starting RPC server as requested. Use JMX (StorageService->startRPCServer()) or nodetool (enablethrift) to start it

My question is: how can I enable thrift when starting this cassandra container? I've tried to set various environment variables to no avail:

    docker run --name cs1 -d -e "start_rpc=true" cassandra
    docker run --name cs1 -d -e "CASSANDRA_START_RPC=true" cassandra
    docker run --name cs1 -d -e "enablethrift=true" cassandra
Enable Thrift in Cassandra Docker
Compose

Compose doesn't have "tasks" as a built-in concept, but you can set them up with multiple compose files in a project. A docker-compose-init.yml could define the tasks, rather than long running services, but you need to manage orchestration yourself. I've put an example on my consul demo.

    docker-compose up -d
    docker-compose -f docker-compose-init.yml run consul_init

Image Build

You can add image build RUN steps to add the data. The complication here is running the server the same way you normally would, but in the background, and adding the data all in the one RUN step.

    FROM progrium/consul:latest
    RUN set -uex; \
        consul agent -server --bootstrap -data-dir /consul/data & \
        let "timeout = $(date +%s) + 15"; \
        while ! curl -f -s http://localhost:8500/v1/status/leader | grep "[0-9]:[0-9]"; do \
            if [ $(date +%s) -gt $timeout ]; then echo "timeout"; exit 1; fi; \
            sleep 1; \
        done; \
        consul kv put somekey somevalue;

Image Startup

Some databases add a script to the image to populate data at startup. This is normally so users can control setup via environment variables injected at run time, like mysql/postgres/mongo.

    FROM progrium/consul:latest
    ENTRYPOINT my-entrypoint.sh

Then your script starts the server, sets up the data, and then at the end continues on as the image would have before.
I am trying to spin up a Consul server in a docker container and use it as a config server for my Spring Boot cloud application. For that I want to have some pre-configured data (key-value pairs) in Consul. My current config in docker-compose.yml is:

    consul:
      image: "progrium/consul:latest"
      container_name: "consul"
      ports:
        - '9330:8300'
        - '9400:8400'
        - '9500:8500'
        - '9600:53'
      command: "-server -bootstrap -ui-dir /ui"

Is there a way to pre-populate key-value pairs?
How to run Consul on docker with initial key-value pair data?
I use docker-compose and I add the following to the .env file:

    REDIS_URL=redis://redis:6379/0
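For completeness, a hedged sketch of wiring that variable through, reusing the service names from the question below; the app service needs to see it as well, not just sidekiq:

    # docker-compose.yml fragment
    app:
      env_file:
        - .env
    sidekiq:
      env_file:
        - .env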
I am getting this error while running my rails app with docker and docker-compose: Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED). Please find my Dockerfile:

    # Copy the Gemfile as well as the Gemfile.lock and install
    # the RubyGems. This is a separate step so the dependencies
    # will be cached unless changes to one of those two files
    # are made.
    COPY Gemfile Gemfile.lock ./
    RUN gem install bundler && bundle install --jobs 20 --retry 5

    # Copy the main application.
    COPY . ./app

    # Expose port 3000 to the Docker host, so we can access it
    # from the outside.
    EXPOSE 3000

    # The main command to run when the container starts. Also
    # tell the Rails dev server to bind to all interfaces by
    # default.
    CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]

Please find my docker-compose.yml file:

    version: "2"
    services:
      redis:
        image: redis
        command: redis-server
        ports:
          - "6379:6379"
      postgres:
        image: postgres:9.4
        ports:
          - "5432"
      app:
        build: .
        command: rails server -p 3000 -b '0.0.0.0'
        volumes:
          - .:/app
        ports:
          - "3000:3000"
        links:
          - postgres
          - redis
          - sidekiq
      sidekiq:
        build: .
        command: bundle exec sidekiq
        depends_on:
          - redis
        volumes:
          - .:/app
        env_file:
          - .env

Thanks in advance!
rails + docker + sidekiq + Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED)
Yes, with init-containers it's quite straightforward:

    apiVersion: v1
    kind: Pod
    metadata:
      name: thp-test
    spec:
      restartPolicy: Never
      terminationGracePeriodSeconds: 1
      volumes:
        - name: host-sys
          hostPath:
            path: /sys
      initContainers:
        - name: disable-thp
          image: busybox
          volumeMounts:
            - name: host-sys
              mountPath: /host-sys
          command: ["sh", "-c", "echo never >/host-sys/kernel/mm/transparent_hugepage/enabled"]
      containers:
        - name: busybox
          image: busybox
          command: ["cat", "/sys/kernel/mm/transparent_hugepage/enabled"]

Demo (notice that this is a system wide setting):

    $ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled
    always [madvise] never
    $ kubectl create -f thp-test.yaml
    pod "thp-test" created
    $ kubectl logs thp-test
    always madvise [never]
    $ kubectl delete pod thp-test
    pod "thp-test" deleted
    $ ssh THATNODE cat /sys/kernel/mm/transparent_hugepage/enabled
    always madvise [never]
I deploy a Redis container via Kubernetes and get the following warning:

    WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled

Is it possible to disable THP via Kubernetes? Perhaps via init-containers?
Disable Transparent Huge Pages from Kubernetes
Add one of the delegated or cached options to the volume mounting your app directory. I've experienced significant performance increases using cached in particular:

    volumes:
      - ~/.composer-docker/cache:/root/.composer/cache:delegated
      - ./:/usr/src/app:cached
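Since the project in question is a Node/Vue app, a further tweak I'd suggest (my addition, not part of the original answer) is masking node_modules with an anonymous volume so the dependency tree never crosses the host bind mount, which is often the main source of CPU churn:

    volumes:
      - ./:/usr/src/app:cached
      # anonymous volume: keeps the node_modules installed in the image
      # instead of shadowing it with the (slow or missing) host copy
      - /usr/src/app/node_modules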
I have a significant delay and high CPU usage when running my vue.js app on a docker instance. This is my docker setup.

docker-compose.yml:

    version: '2'
    services:
      app:
        build:
          context: ./
          dockerfile: docker/app.docker
        working_dir: /usr/src/app
        volumes:
          - ~/.composer-docker/cache:/root/.composer/cache:delegated
          - ./:/usr/src/app
        stdin_open: true
        tty: true
        environment:
          - HOST=0.0.0.0
          - CHOKIDAR_USEPOLLING=true
        ports:
          - 8080:8080

app.docker:

    # base image
    FROM node:8.10.0-alpine

    # Create app directory
    WORKDIR /usr/src/app

    # Install app dependencies
    COPY package*.json ./
    RUN npm install

    # Bundle app source
    COPY . .

    EXPOSE 8080
    CMD [ "npm", "run", "serve"]

This setup works fine when I type docker-compose up -d and my app is loading at http://localhost:8080/, but hot reloading happens after 10 seconds, then 15 seconds, and it keeps increasing; my laptop's CPU usage reaches 60% and is still climbing. I am on a MacBook Pro with 16 GB RAM, and for docker I have enabled 4 CPUs and 6 GB RAM. How can this issue be resolved?
Vue.js app on a docker container with hot reload
I changed my Dockerfile to have the following COPY statement:

    COPY out ./

That made the ENTRYPOINT work, because dotnet was then able to find myApp.dll. I think the error message could be improved here, but that's me assuming that's what happened.
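Putting it together, a plausible corrected Dockerfile (my reconstruction rather than the poster's exact file): the original COPY out ./service/ placed the published files in /service/service, one level below the WORKDIR the ENTRYPOINT runs in.

    FROM microsoft/dotnet:1.1.0-preview1-runtime
    WORKDIR /service
    # copy the published output directly into the working directory
    COPY out ./
    ENTRYPOINT ["dotnet", "myApp.dll"]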
I'm trying to run a .NET Core app on my Mac. I'm using VS Code and upgraded the project to .NET 1.1. Everything works fine when I run it through VS Code, however when I run it using Docker it fails. I do the following steps:

    dotnet publish -c Release -o out
    docker build -t myApp .

The Dockerfile looks like this:

    FROM microsoft/dotnet:1.1.0-preview1-runtime
    WORKDIR /service
    COPY out ./service/
    ENTRYPOINT ["dotnet", "myApp.dll"]

Essentially I'm following the steps from https://github.com/dotnet/dotnet-docker. I'm getting the following error every time:

    Did you mean to run dotnet SDK commands? Please install dotnet SDK from: http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409

I'm not sure what I am missing here...
Run dotnet 1.1 using docker
The temporary solution is to set the tty erase command to whatever your backspace key sends in the terminal when connected via ssh:

    stty erase ^H

The ^H sequence above is not literal text but a control character, entered by pressing ctrl-v and then backspace. You can add this command to your .bashrc file on the docker VM to have it set automatically each time you connect. When editing the file in a terminal you have to enter the string with the same escape sequence as above. Doing this in .bashrc will set erase for all logins to the VM, so you may negatively impact the way erase works for other terminal types that connect. You would normally fix this on the client side rather than the server. For example, PuTTY has a specific setting for this. I'm not sure why your powershell/ssh combo doesn't have erase mapped correctly, as Docker normally works out of the box. Check what your Docker shortcuts do when launching the docker/ssh terminal and do the same when you launch your terminal to connect manually.
In Windows 10, when I launch MS PowerShell to ssh into a container through Kitematic, I've noticed that I can't backspace or delete; instead I get ^H for backspace rather than actually deleting the previous character. Am I missing something?
How to backspace or delete?
April 2023 Update

Docker Desktop 4.18.0 has been released, which resolves this issue.

Original answer

This seems to be a known issue with Docker Desktop 4.17.1. Older versions can be found here: https://docs.docker.com/desktop/release-notes/
I used to use Docker Desktop with WSL2 integration and there was no problem running containers with GPU support. However, after a recent update to Docker Desktop v4.17.1 (March 2023), any container that I run on WSL with the --gpus all flag hangs forever without any response. The same containers run without any issue unless the --gpus flag is specified. Running a CUDA container with nvidia-smi on WSL hangs without any response. Note: nvidia-smi works fine in WSL. System: Windows 11. I have tried fresh-installing Docker Desktop and fresh installations of all WSL distros. The WSL distros have access to the GPU and the NVIDIA CUDA drivers. I am able to use Docker Desktop within WSL without any issues, except that running any container using the --gpus flag hangs without any errors or a response.
Running docker desktop containers with --gpus tag hangs without any response in wsl
UPDATE: You can now delete individual container images straight from the UI.

Go to the Container Registry page.
You should see a list of container images. Click the one you want to delete.
Select one or more tags, and click the delete button.

As of Nov 2015: there is no way to currently delete a single container image from the registry cleanly. Right now, it is basically all or nothing. The GCR team is working on this!

Original answer: I can't think of an easy way to delete individual images. You can delete ALL of the images by deleting the Cloud Storage bucket with gsutil rb gs://artifacts..appspot.com. You can also use the storage browser and try to delete individual parts (https://console.developers.google.com/storage/browser/artifacts..appspot.com) but you would have to know the Docker hashes for each layer!
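These days there is also a CLI route; a hedged sketch of it (my addition, where PROJECT, IMAGE and DIGEST are placeholders and a reasonably current gcloud SDK is assumed):

    # list tags and digests for an image
    gcloud container images list-tags gcr.io/PROJECT/IMAGE

    # delete a specific digest, removing its tags as well
    gcloud container images delete gcr.io/PROJECT/IMAGE@sha256:DIGEST --force-delete-tags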
I have pushed container images using gcloud docker push to the Google Container Registry. Two questions:

How do I cleanly remove a pushed container image from the registry? (I know I can remove a tag to an image and make it not accessible anymore.)

There are a bunch of Docker layers that an image brings with it. I want to remove all the unused layers with an image deletion.
How can I cleanly remove a container image from the Google Container Registry?
You haven't explained why you want to see your container running after your script has exited, or whether or not you expect your script to exit. A docker container exits as soon as the container's CMD exits. If you want your container to continue running, you will need a process that will keep running. One option is simply to put a while loop at the end of your script:

    while :; do
        sleep 300
    done

Your script will never exit so your container will keep running. If your container hosts a network service (a web server, a database server, etc), then this is typically the process that runs for the life of the container. If instead your script is exiting unexpectedly, you will probably need to take a look at your container logs (docker logs <container>) and possibly add some debugging to your script. If you are simply asking, "how do I run a container in the background?", then Emil's answer (pass the -d flag to docker run) will help you out.
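For completeness, a sketch of the two usual patterns, with the image tag and script path taken from the question; the tail trick is my suggestion rather than part of this answer:

    # run detached; the container lives exactly as long as the script does
    docker run -d image:tag /bin/sh /root/my_script.sh

    # or keep the container alive after the script has done its setup work
    docker run -d image:tag /bin/sh -c '/root/my_script.sh && tail -f /dev/null'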
I am trying to run a shell script in my docker container. The problem is that the shell script spawns another process, and that process should continue to run unless another shutdown script is used to terminate the processes spawned by the startup script. When I run the command

    docker run image:tag /bin/sh /root/my_script.sh

and then

    docker ps -a

I see that the command has exited. But this is not what I want. My question is: how do I let the command run in the background without the container exiting?
docker run a shell script in the background without exiting the container
If you use a Red Hat clone with the rpms from CentOS extras, they have a parameter to do that:

    --block-registry hostname

This is a Red Hat extension (see here) which I don't think was accepted upstream (considering issue 13450).
When I mistype things, it tries to search in the central registry. How do I disable that completely? Is there an option to clear that URL?
Docker: disable pulling from remote registry
CHOKIDAR_USEPOLLING will no longer work in react-scripts ^5, as Webpack started using its own filesystem watcher (Watchpack) as a replacement for Chokidar. Try:

    environment:
      - WATCHPACK_POLLING=true
I'm creating a React app with Docker in WSL2 and create-react-app, and everything seems to be working fine except that the app is not updating with changes in the code. When I make a change in the code, the browser should pick up the change automatically, but it doesn't, and I have to restart the container to see it. I added CHOKIDAR_USEPOLLING=true in ENV but it's not working either. These are the configuration files:

dockerfile

    # pull official base image
    FROM node:16.13.1

    # set working directory
    WORKDIR /app

    # add `/app/node_modules/.bin` to $PATH
    ENV PATH /app/node_modules/.bin:$PATH

    # install app dependencies
    COPY package.json ./
    COPY package-lock.json ./
    RUN npm install

    # add app
    COPY . ./

    # start app
    CMD ["npm", "start"]

docker-compose.yml

    services:
      react:
        build: ./frontend
        command: npm start
        ports:
          - 3000:3000
        volumes:
          - ./frontend:/app
        env_file:
          - 'env.react'

env.react

    CHOKIDAR_USEPOLLING=true

package.json

    {
      "name": "app",
      "version": "0.1.0",
      "private": true,
      "dependencies": {
        "@testing-library/jest-dom": "^5.16.1",
        "@testing-library/react": "^12.1.2",
        "@testing-library/user-event": "^13.5.0",
        "mdbreact": "^5.2.0",
        "react": "^17.0.2",
        "react-dom": "^17.0.2",
        "react-painter": "^0.4.0",
        "react-router-dom": "^6.2.1",
        "react-scripts": "5.0.0",
        "sass": "^1.45.1",
        "web-vitals": "^2.1.2"
      },
      "scripts": {
        "start": "react-scripts start",
        "build": "react-scripts build",
        "test": "react-scripts test",
        "eject": "react-scripts eject"
      },
      "eslintConfig": {
        "extends": [
          "react-app",
          "react-app/jest"
        ]
      },
      "browserslist": {
        "production": [
          ">0.2%",
          "not dead",
          "not op_mini all"
        ],
        "development": [
          "last 1 chrome version",
          "last 1 firefox version",
          "last 1 safari version"
        ]
      }
    }

Can you see what I'm doing wrong? Thanks!
Docker with create react app is not updating changes
This turned out to be easy, and I did it by editing the csproj file: change Windows to Linux and reload. I am still not sure where you would do this from Visual Studio (if it is possible).
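For context, the element involved is most likely the DockerDefaultTargetOS property that the Visual Studio Docker tooling writes into the project file; the property name is my assumption based on the standard tooling, not something stated in the original answer. A sketch of the edit:

    <PropertyGroup>
      <!-- was: <DockerDefaultTargetOS>Windows</DockerDefaultTargetOS> -->
      <DockerDefaultTargetOS>Linux</DockerDefaultTargetOS>
    </PropertyGroup>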
I have a .NET Core Web Application project where I chose the incorrect OS under the "Enable Docker Support" checkbox. How do I change this for an existing project? And to be clear, I want to target Linux, not "Switch to Windows Containers..." in Docker.
How do you change Docker OS Support for a .Net Core Web Application Project?
By “the file already exists”, do you mean that the file is on your host at /prometheus-data/prometheus.yml? If so, then you need to bind mount it into your container for it to be accessible to Prometheus:

    sudo docker run -p 9090:9090 -v /prometheus-data/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus

It's covered under Volumes & bind-mount in the documentation.
I am trying to load prometheus with docker using the following custom conf file:

    danilo@machine:/prometheus-data/prometheus.yml:

    global:
      scrape_interval: 15s # By default, scrape targets every 15 seconds.

      # Attach these labels to any time series or alerts when communicating with
      # external systems (federation, remote storage, Alertmanager).
      external_labels:
        monitor: 'codelab-monitor'

    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=` to any timeseries scraped from this config.
      - job_name: 'prometheus'

        # Override the global default and scrape targets from this job every 5 seconds.
        scrape_interval: 5s

        static_configs:
          - targets: ['localhost:9090']
          - targets: ['localhost:8083', 'localhost:8080']
            labels:
              my_app
              group: 'my_app_group'

With the following command:

    $ sudo docker run -p 9090:9090 prom/prometheus --config.file=/prometheus-data/prometheus.yml

The file already exists. However, I am getting the following message:

    level=error ts=2018-09-26T17:45:00.586704798Z caller=main.go:617 err="error loading config from "/prometheus-data/prometheus.yml": couldn't load configuration (--config.file="/prometheus-data/prometheus.yml"): open /prometheus-data/prometheus.yml: no such file or directory"

I'm following this guide: https://prometheus.io/docs/prometheus/latest/installation/ What can I do to load this file correctly?
Can't load prometheus.yml config file with docker (prom/prometheus)
I had to grant the root user at the Django container permissions to access the DB:

    GRANT ALL PRIVILEGES ON *.* TO 'root'@'172.17.0.3' IDENTIFIED BY 'password' WITH GRANT OPTION;
    SET PASSWORD FOR root@'172.17.0.3' = PASSWORD('root');
    FLUSH PRIVILEGES;

Where 172.17.0.3 is the IP of the container with the app. MYSQL_ROOT_HOST is not needed.
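A less IP-specific variant, offered as my own suggestion rather than part of the answer above: since container IPs can change between runs, the mysql/mysql-server image also accepts a wildcard root host at startup.

    # permissive: allows root to connect from any host, so only use it for local development
    docker run --name mysql \
      -e MYSQL_ROOT_PASSWORD=root \
      -e MYSQL_DATABASE=testdb \
      -e MYSQL_ROOT_HOST=% \
      -d mysql/mysql-server:5.7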
When I try to connect from a docker container running my Django app to a container running MySQL, I get the following error:

    django.db.utils.OperationalError: (2003, "Can't connect to MySQL server on '172.17.0.2' (111)")

Here's how I'm running the MySQL container:

    $ docker run --name mysql -e MYSQL_ROOT_PASSWORD=root -e MYSQL_DATABASE=testdb -e MYSQL_ROOT_HOST=172.17.0.2 -d mysql/mysql-server:5.7

If I don't specify MYSQL_ROOT_HOST, I get this error when I try to connect from the container with the Django app:

    django.db.utils.OperationalError: (1130, "Host '172.17.0.3' is not allowed to connect to this MySQL server")

Here are my Django settings:

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'NAME': 'testdb',
            'USER': 'root',
            'PASSWORD': 'root',
            'HOST': '172.17.0.2',
            'PORT': '',
        }
    }

I've verified the MySQL container is using IP 172.17.0.2:

    $ docker inspect mysql | grep -i ipaddress
        "SecondaryIPAddresses": null,
        "IPAddress": "172.17.0.2",
        "IPAddress": "172.17.0.2",
Cannot connect to MySQL docker container from container with Django app
Amazon has fixed this problem in the versions of the Elastic Beanstalk AL2 platforms released on 04-AUG-2020. It has been fixed so that log task customization on AL2-based platforms now works the way it has always worked (i.e. on the previous-generation AL2018 platforms) and you can therefore follow the official documentation in order to make this happen. Successfully tested with platform "Docker running on 64bit Amazon Linux 2/3.1.0". If you (still) use "Docker running on 64bit Amazon Linux 2/3.0.x" then you must use the undocumented workaround described in Marcin's answer, but you are probably better off upgrading your platform version.
I'm wondering how to do log task customization in the new Elastic Beanstalk platform (the one based on Amazon Linux 2). Specifically, I'm comparing:

Old: Single-container Docker running on 64bit Amazon Linux/2.14.3
New: Single-container Docker running on 64bit Amazon Linux 2/3.0.0

(My question actually has nothing to do with Docker as such; I'm speculating the problem exists for any of the new Elastic Beanstalk platforms.)

Previously I could follow Amazon's recipe, meaning put a file into /opt/elasticbeanstalk/tasks/bundlelogs.d/ and it would then be acted upon. This is no longer true. Has this changed? I can't find it documented. Has anyone been successful in doing log task customization on the newer Elastic Beanstalk platform? If so, how?

Minimal working example

I've created a minimal working example and deployed it on both platforms.

Dockerfile:

    FROM ubuntu
    COPY daemon-run.sh /daemon-run.sh
    RUN chmod +x /daemon-run.sh
    EXPOSE 80
    ENTRYPOINT ["/daemon-run.sh"]

Dockerrun.aws.json:

    {
      "AWSEBDockerrunVersion": "1",
      "Logging": "/var/mydaemon"
    }

daemon-run.sh:

    #!/bin/bash
    echo "Starting daemon" # output to stdout
    mkdir -p /var/mydaemon/deeperlogs
    while true; do
      echo "$(date '+%Y-%m-%dT%H:%M:%S%:z') Hello World" >> /var/mydaemon/deeperlogs/app_$$.log
      sleep 5
    done

.ebextensions/mydaemon-logfiles.config:

    files:
      "/opt/elasticbeanstalk/tasks/bundlelogs.d/mydaemon-logs.conf" :
        mode: "000755"
        owner: root
        group: root
        content: |
          /var/log/eb-docker/containers/eb-current-app/deeperlogs/*.log

If I do the "Full Logs" action on the old platform I get a ZIP with my deeperlogs included inside var/log/eb-docker/containers/eb-current-app. On the new platform I don't.

Investigation

If you look on the disk you'll see that the new Elastic Beanstalk doesn't have an /opt/elasticbeanstalk/tasks folder at all, unlike the old one. Hmm.
Elastic Beanstalk: log task customization on Amazon Linux 2 platforms
I had the same issue and found the solution. You need to change the default listening hostname. By default, the app listens on localhost, ignoring any incoming requests from outside the container. Change your code in Program.cs to listen for all incoming calls:

    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseUrls("http://*:5000")
                .UseIISIntegration()
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }

More info: https://medium.com/trafi-tech-beat/running-net-core-on-docker-c438889eb5a#.hwhoak2c0

Make sure you are running the container with the -p flag with the port binding:

    docker run -p 5000:5000 <image>

At the time of this writing, it seems that the EXPOSE 5000:5000 command does not have the same effect as the -p command. I haven't had real success with -P (upper case) either.

Edit 8/28/2016: One thing to note. If you are using docker compose with the -d (detached) mode, it may take a little while for the dotnet run command to launch the server. Until it's done executing the command (and any others before it), you will receive ERR_EMPTY_RESPONSE in Chrome.
I've created an app in ASP.NET Core and created a Dockerfile to generate a local image and run it:

    FROM microsoft/dotnet:latest
    COPY . /app
    WORKDIR /app
    RUN ["dotnet", "restore"]
    RUN ["dotnet", "build"]
    EXPOSE 5000/tcp
    ENTRYPOINT ["dotnet", "run", "--server.urls", "http://0.0.0.0:5000"]

Then I built my image with the following command:

    docker build -t jedidocker/first .

Then I created a container with the following command:

    docker run -t -d -p 5000:5000 jedidocker/first

But when I open the following URL in my host browser:

    http://localhost:5000/

I get an ERR_EMPTY_RESPONSE. Is it a network problem? How can I solve it? P.S: My Windows version is 10.0.10586
Cannot acces Asp.Net Core on local Docker Container
This answer assumes that by "... spring-boot application to use buildpacks" you mean the use of the spring-boot:build-image maven goal. The issue lies with the default builder (gcr.io/paketo-buildpacks/builder:base) used by the maven plugin. The builder is responsible for configuring the OS image, and the "base" builder doesn't include the fontconfig package. The easiest way to get the fontconfig package is to use the "full" builder (gcr.io/paketo-buildpacks/builder:full-cf or gcr.io/paketo-buildpacks/builder:latest); you can do so for example in one of the following ways:

by specifying the builder configuration parameter in the maven plugin,

    <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <version>2.3.3.BUILD-SNAPSHOT</version>
        <configuration>
            <image>
                <builder>gcr.io/paketo-buildpacks/builder:latest</builder>
            </image>
        </configuration>
    </plugin>

or directly on your mvn command line by adding -Dspring-boot.build-image.builder=gcr.io/paketo-buildpacks/builder:latest.

However, this is not ideal because the full OS image is much larger (approx. 1.45GB for "full" vs. 644MB for "base", as observed in the docker image listing), a fair bit of overhead "just" for enabling fontconfig. A more involved approach would require creating a custom builder with custom mixins, in order to create a tailored "base" image with the extra packages. But I personally found it easier to just use the dockerfile approach in this scenario. Some articles on creating a custom builder:

https://buildpacks.io/docs/operator-guide/create-a-builder/
https://medium.com/@srinivasan.surprise/unpack-cloud-native-buildpacks-9959b601424b
I updated my spring-boot application to use buildpacks to create my docker image instead of a Dockerfile. I also use Apache POI in my application, and since that update I get an error when generating an xlsx file. After some digging, I think it happens because the fontconfig and/or ttf-dejavu packages are missing. But how do I add these to the docker image? With a Dockerfile I would just add something like

    RUN apt-get update && apt-get install fontconfig ttf-dejavu

But how do I achieve the same with buildpacks?
How to add extra linux dependencies into a spring-boot buildpack image?
You are starting the app (using docker run) for some reason, which you might not need. The dpl tool is intended to be used inside a codebase, rather than for image deployment. As you said,

    build_image:
      image: docker:latest
      services:
        - docker:dind
      stage: package
      script:
        - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
        - docker build -t registry.gitlab.com/maciejsobala/myApp .
        - docker push registry.gitlab.com/maciejsobala/myApp:latest

is working, which means your runner is able to run docker in docker and successfully push images. For Heroku deployment, you only need to push that image to the Heroku docker registry, according to the official Heroku documentation. In short you do a

    deploy_to_heroku:
      stage: deploy
      services:
        - docker:dind
      script:
        - docker login --email=_ --username=_ --password=<your-auth-token> registry.heroku.com
        - docker tag registry.gitlab.com/maciejsobala/myApp:latest registry.heroku.com/maciejsobala/myApp:latest
        - docker push registry.heroku.com/maciejsobala/myApp:latest

with your Heroku auth token, which you can get by running heroku auth:token. As said in the documentation, pushing to Heroku's registry triggers a release process of the app.
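Worth noting as an aside, and this is my addition based on later Heroku behaviour rather than part of the original answer: on current Heroku CLI versions a push to registry.heroku.com no longer releases the image automatically, so an explicit release step is needed. The process type and app name below are placeholders:

    # after the docker push
    heroku container:release web --app <your-app>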
I have a project repository hosted on GitLab. I am using gitlab-ci to build a docker container from my project. What I would like to achieve is deploying that container to Heroku. I was trying to follow the solution from this question: How to build, test and deploy using Jhipster, Docker, Gitlab and Heroku. Here is how my .gitlab-ci.yaml looks:

    stages:
      - build
      - package
      - deploy

    build_npm:
      image: node:latest
      stage: build
      script:
        - npm install
        - npm run build:prod
      artifacts:
        paths:
          - dist/

    build_image:
      image: docker:latest
      services:
        - docker:dind
      stage: package
      script:
        - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
        - docker build -t registry.gitlab.com/maciejsobala/myApp .
        - docker push registry.gitlab.com/maciejsobala/myApp:latest

    deploy_to_heroku:
      stage: deploy
      services:
        - docker:dind
      script:
        - gem install dpl
        - docker run registry.gitlab.com/maciejsobala/myApp:latest
        - dpl --provider=heroku --app= myApp --api-key=$HEROKU_API_KEY

What I am trying to achieve is to have 3 stages:

build: at this moment, compile only the npm project (in the future, I want to add some jar here).
package: create the docker image and push it to the registry.
deploy: install the docker image on Heroku.

I am running into issues with the last stage (deploy). To be honest, I am not really sure what should be done here. I tried to use dpl, following this tutorial: https://docs.gitlab.com/ce/ci/examples/test-and-deploy-ruby-application-to-heroku.html Unfortunately I am running into issues when trying to run the docker image:

    $ docker run registry.gitlab.com/maciejsobala/myApp:latest
    /bin/bash: line 49: docker: command not found

I am completely blind here. I would really appreciate any solutions, links to articles/tutorials etc.
Deploy docker container from external registry to Heroku
