GUI Qt Application in a docker container

I am trying to run some Qt applications in a Docker container with /tmp/.X11-unix mounted. I saw here that this can be difficult.
When I run KDevelop in a Docker container, it does not work (I get an empty window), but Qt Creator runs fine.
I think the difference comes from the Qt version (KDevelop is built on Qt4 and Qt Creator on Qt5). All my other Qt5 applications work fine, but not a single Qt4 one does.
Question:
Does anyone know how to launch a Qt4 application without going through VNC or SSH, just like this:
docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix someQt4AppContainer

?

Solutions/Answers:

Solution 1:

Qt5 and Qt4 have different rendering subsystems.

The Qt4 renderer just needs a hint:

export QT_GRAPHICSSYSTEM="native"

This should work. Note that the variable has to be set inside the container, not on the host, so pass it with -e:

docker run -it -e DISPLAY=$DISPLAY -e QT_GRAPHICSSYSTEM="native" -v /tmp/.X11-unix:/tmp/.X11-unix someQt4AppContainer
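
If you control the image, you can also bake the hint in at build time. A minimal Dockerfile sketch, assuming a hypothetical Debian-based image with KDevelop installed:

FROM debian:jessie

# Qt4's default raster graphics system can render an empty window over a
# mounted X11 socket; "native" switches to the plain X11 graphics system
ENV QT_GRAPHICSSYSTEM=native

RUN apt-get update && apt-get install -y kdevelop \
 && rm -rf /var/lib/apt/lists/*

CMD ["kdevelop"]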

How to connect with JMX from host to Docker container in Docker machine?

When I have a Docker container running directly on my host, I can connect to it without any problems.
My host has network 192.168.1.0/24 and the IP address of the host is 192.168.1.20. My Docker container has IP address 172.17.0.2. When I connect to 172.17.0.2:1099 from jconsole, it works.
When I put this service into a Docker machine, it is not possible to connect to it.
My Docker machine has IP 192.168.99.100 and the container in it has IP address 172.17.0.2, but when I use jconsole to connect to 192.168.99.100:1099, it does not work.
To repeat it:
192.168.1.20 — 172.17.0.2:1099 works
192.168.1.20 — (192.168.99.100 — 172.17.0.2:1099) and connecting to 192.168.99.100:1099 from my host does not work.
It is worth mentioning that I can access services containerized in the Docker machine via the external IP address of the Docker machine; e.g. this works:
192.168.99.100 — (192.168.99.100:8080 — 172.17.0.2:8080)
But when I use JMX it just does not work.
It is a Tomcat service. I have this in the script that starts the Tomcat instance:
CATALINA_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,address=8000,server=y,suspend=n \
-Dcom.sun.management.jmxremote.port=1099 \
-Dcom.sun.management.jmxremote.rmi.port=1099 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Djava.rmi.server.hostname=<IP address of Docker container>"

Solutions/Answers:

Solution 1:

I think the problem is probably the value of the java.rmi.server.hostname property. This needs to be the hostname or IP address that the JMX client should use to connect to your JVM. That is, in the first case, where you connect to your container directly using 172.17.0.2:1099, it needs to be set to 172.17.0.2. In the latter case, where you access the container through the Docker machine on 192.168.99.100:1099, it needs to be set to 192.168.99.100.

During my research for a very similar question (which was deleted in the meantime) I stumbled across a blog entry (which has also been deleted since). Although it's rather old, it gave me an idea of how the JMX connectivity works:

  1. The JMX registry listens on port <com.sun.management.jmxremote.port> of the container
  2. If you connect to the registry with JConsole, the registry provides the JMX service URL to the client.
  3. This URL is used by the client to obtain the JMX objects

The service URL looks like this: service:jmx:rmi:///jndi/rmi://<java.rmi.server.hostname>:<com.sun.management.jmxremote.rmi.port>/jmxrmi. That is, in your case, service:jmx:rmi:///jndi/rmi://172.17.0.2:1099/jmxrmi. As this address is only reachable from within the Docker machine, connecting from outside is not possible. In my question I cover the same problem with regard to the RMI port…

There doesn't seem to be an out-of-the-box solution to this problem. However, one can provide both the JMX port and the external hostname (or IP) as environment variables when starting the container, as suggested here. These can then be used in the JMX config:

docker run -p 1099:1099 \
    -e "JMX_HOST=192.168.99.100" \
    -e "JMX_PORT=1099" \
    company/tomcat:8.0.30

and

CATALINA_OPTS="... \
    -Dcom.sun.management.jmxremote=true \
    -Dcom.sun.management.jmxremote.port=$JMX_PORT \
    -Dcom.sun.management.jmxremote.rmi.port=$JMX_PORT \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Djava.rmi.server.hostname=$JMX_HOST"

Not very nice, but it should work…
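
With the container started this way, connecting from the host should be a matter of pointing JConsole at the Docker machine's address (using the IP and port assumed above):

jconsole 192.168.99.100:1099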

Solution 2:

In case anyone else has problems with this: I started the Java process in the Docker container with the following parameters:

-Dcom.sun.management.jmxremote 
-Dcom.sun.management.jmxremote.port=9876 
-Dcom.sun.management.jmxremote.rmi.port=9876 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.authenticate=false 
-Djava.rmi.server.hostname=<name of the docker container>

The important part is to set java.rmi.server.hostname to the name of the Docker container, and to EXPOSE port 9876 in the container. I have also set up an SSH connection and forwarded 9876 to localhost.

The following goes to your SSH config:

LocalForward 127.0.0.1:9876 127.0.0.1:9876

I have also set up /etc/hosts on the local machine:

127.0.0.1 <name of the docker container>

Now connect your console to "name of the docker container".
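
Assuming the SSH tunnel is up, the connection from the local machine would then look like this (the container name is the same placeholder as above):

jconsole <name of the docker container>:9876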

Pass command line arguments to Docker with Ansible

I have a Java socket application that requires a port number as a CLI argument. On my local machine I can run it via:
docker run -d -p 1111:1111 --name <name> --link <link> <foo>/<bar> 1111

The problem is that I haven’t found a solution to pass the port number when using Ansible (I have a different task that pulls the image). Current task:
- name: Run server
  docker:
    name: <name>
    image: <foo>/<bar>
    state: reloaded
    ports:
      - "1111:1111"
    links:
      - "<link>"

Is there a way to pass the port as a CLI argument? Or is there a simple way to work around this? I've thought about uploading a new image or using the command module, but neither seems like the right way to go.

Solutions/Answers:

Solution 1:

There is no native support for passing arbitrary arguments in Ansible's docker module. See passing extra args to docker: task.

Can't you use the shell module to achieve what you want?
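
A minimal sketch of that fallback, reusing the placeholders from the question:

- name: Run server (shell fallback, passing the port as a CLI argument)
  shell: docker run -d -p 1111:1111 --name <name> --link <link> <foo>/<bar> 1111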

Solution 2:

If you can change the image, I would recommend using environment variables instead. That's supported by the docker module:

- name: Run server
  docker:
    name: <name>
    image: <foo>/<bar>
    state: reloaded
    ports:
      - "1111:1111"
    links:
      - "<link>"
    env:
      MY_PORT: 1111
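
On the image side, a hypothetical entrypoint script would then read the variable and hand it to the application; the script and jar names here are assumptions, not from the original answer:

#!/bin/sh
# entrypoint.sh: pass the port from the environment to the socket server,
# falling back to 1111 if MY_PORT is not set
exec java -jar /app/server.jar "${MY_PORT:-1111}"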

GitLab CI Runner, how to use volumes or mounts in service containers

I use the GitLab CI Runner; it uses the command:
docker run -d --name postgres postgres:9.4

I want to do something like this:
docker run -d --name postgres --volumes-from postgres_datastore postgres:9.4

But GitLab CI Runner doesn't support any options (-v or --volumes-from).
Is there any other way?

Solutions/Answers:

Solution 1:

The Docker volumes-from option is not yet available in the GitLab CI Runner (see this PR); however, you can configure host mounts and volumes in the runner's config.toml:

[runners.docker]
  volumes = ["/host/path:/target/path:rw", "/some/path"]

The above example would mount /host/path at /target/path inside the container and also create a new volume container at /some/path.

See the GitLab CI Runner manual for all Docker-related options.

Edit:

For service containers, it seems you can currently only define volumes via the Dockerfile of the service image (see the sketch below). That may be enough, depending on your requirements.
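
For example, a minimal sketch of a service image that declares its data directory as a volume (based on the postgres image from the question; the official image already does this):

FROM postgres:9.4

# Declare the data directory as a volume so the engine creates
# a volume for it when the service container starts
VOLUME /var/lib/postgresql/data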

How to start docker-machine on OSX?

I installed and used Docker for the first time yesterday.
Everything was working properly, but last night I shut down my computer.
Today I started it up and wanted to work on my Docker app, but when I try to run it like
docker run -d -p 8080:8080 container/app
I got the error:

docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.

But I can't find out how to launch Docker again.
Any ideas?
EDIT:
eval "$(docker-machine env default)"

result:

Error checking TLS connection: Host is not running

Solutions/Answers:

Solution 1:

The docker-machine env default command won’t work if the “default” machine is not running.

You can run the docker-machine ls command, which should give you a list of machines that are configured, and their current status (running, stopped).

If a machine is stopped, run docker-machine start <name-of-machine>. After that you should be able to set the environment variables using

eval "$(docker-machine env default)"

Please read the documentation at https://docs.docker.com/machine/overview for more details.
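
Putting it together, a typical recovery sequence looks like this:

docker-machine ls                      # check which machines exist and their state
docker-machine start default           # boot the stopped machine
eval "$(docker-machine env default)"   # point the docker CLI at it
docker ps                              # verify the connection works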

Solution 2:

After I run the command $ docker-machine start default, I get the following message:

Starting "default"...
(default) Check network to re-create if needed...
(default) Creating a new host-only adapter produced an error: /usr/local/bin/VBoxManage hostonlyif create failed:
(default) 0%...
(default) Progress state: NS_ERROR_FAILURE
(default) VBoxManage: error: Failed to create the host-only adapter
(default) VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
(default) VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
(default) VBoxManage: error: Context: "RTEXITCODE handleCreate(HandlerArg *)" at line 94 of file VBoxManageHostonly.cpp
(default) 
(default) This is a known VirtualBox bug. Let's try to recover anyway...
Error setting up host only network on machine start: The host-only adapter we just created is not visible. This is a well known VirtualBox bug. You might want to uninstall it and reinstall at least version 5.0.12 that is supposed to fix this issue

A solution is suggested at the end of the message:

This is a known VirtualBox bug. Let's try to recover anyway...
Error setting up host only network on machine start: The host-only adapter we just created is not visible. This is a well known VirtualBox bug. You might want to uninstall it and reinstall at least version 5.0.12 that is supposed to fix this issue

I uninstalled and then reinstalled VirtualBox, but it was still the same.

Then I allowed apps from anywhere on the machine, and this solved the issue:

Allow Apps from Anywhere: sudo spctl --master-disable

Cannot (apt-get) install packages inside docker

I installed an Ubuntu 14.04 virtual machine and run Docker (1.11.2) on it. I am trying to build a sample image (here).
Dockerfile:
FROM java:8

# Install maven
RUN apt-get update
RUN apt-get install -y maven
….

I get the following error:
Step 3: RUN apt-get update
 ---> Using cache
 ---> 64345sdd332
Step 4: RUN apt-get install -y maven
 ---> Running in a6c1d5d54b7a
Reading package lists...
Reading dependency tree...
Reading state information...
E: Unable to locate package maven
INFO[0029] The command [/bin/sh -c apt-get install -y maven] returned a non-zero code: 100

I have tried the following solutions, but with no success:

restarted Docker (here)
ran it as apt-get -qq -y install curl (here): same error 🙁

How can I view a detailed error message?
Is there any way to fix the issue?

Solutions/Answers:

Solution 1:

You may need to update the OS package lists inside the Docker image first:

Try running apt-get update, then apt-get install xxx.

Solution 2:

The cached result of the apt-get update may be very stale. Redesign the package pull according to Docker best practices:

FROM java:8 

# Install maven
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install -y maven \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/*  

Solution 3:

Based on similar issues I had, you want to look at both possible network issues and possible image-related issues.

  • Network issues: you are already looking at proxy-related stuff. Also make sure the iptables setup done automatically by Docker has not been messed up unintentionally by yourself or another application. Typically, if another Docker container runs with the --net=host option, this can cause trouble.

  • Image issues: the distro you are running in your container is not Ubuntu 14.04 but the one java:8 was built from. If you took the java image from the official library on Docker Hub, you have a hierarchy of images coming initially from Debian jessie. You might want to look at the different Dockerfiles in this hierarchy to find out where the repo setup differs from what you expect.

For both situations, to debug this, I recommend you run a shell inside the last successfully built image to look at the actual network and repo situation. In your case,

docker run -ti --rm 64345sdd332 /bin/bash

gives you a shell just before your maven install command runs.

Solution 4:

I am currently working behind a proxy, and it failed to download some dependencies. For that, you have to add the proxy configuration to the Dockerfile (ref), for example as shown below.
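
A minimal sketch of that proxy configuration (the proxy host and port are placeholders):

# Propagate the corporate proxy to apt-get and other tools in the build
ENV http_proxy=http://proxy.example.com:3128
ENV https_proxy=http://proxy.example.com:3128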

But now I am facing difficulty running "mvn", "dependency:resolve" due to the proxy; Maven itself is blocked from downloading some dependencies and the build fails.

Thanks, buddies, for your great support!

Docker Django 404 for web static files, but fine for admin static files

Please help me with this Docker Django configuration for serving static files.
My Django project running on Docker has some issues delivering static files.

All static files for the admin view load fine, but static files for the client web view throw a 404 Not Found error.

This is my docker-compose.yml configuration:
web:
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - ./web:/usr/src/app
  ports:
    - "8000:8000"
  env_file: .env
  command: python manage.py runserver 0.0.0.0:8000

postgres:
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  ports:
    - "5432:5432"

Update:
This is what an admin static file URL looks like:
http://developer.com:8000/static/admin/css/base.css
and this is what a client static file URL looks like:
http://developer.com:8000/static/css/base.css
The admin folder in the static directory was created by running the Django collectstatic command.
I had used this setup previously and it was working fine, but since I moved the project root folder to another directory I seem to have this issue.
I am totally stuck here; many thanks for all your help and feedback.

Solutions/Answers:

Solution 1:

This was an issue with the STATICFILES_DIRS configuration in the settings.py file.

This setting defines the additional locations the staticfiles app will traverse if the FileSystemFinder finder is enabled, e.g. if you use the collectstatic or findstatic management command or use the static file serving view.

The following was the configuration in my settings.py:

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "static")

Now I updated this code to:

STATIC_URL = '/static/'
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static"),
]

And every file now loads fine.
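
To check which file the finders actually resolve, you can ask Django directly; a quick sanity check using the client file from the question:

python manage.py findstatic css/base.css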

Reference Link

Solution 2:

Use Whitenoise to make your life easier when dealing with static files in django.

1. If you are using docker-compose, add whitenoise to your requirements.txt file:

whitenoise==3.3.1

2. Add WhiteNoise to your middleware apps inside settings.py:

MIDDLEWARE_CLASSES = [
    # ...
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',
    # ...
]

Make sure that you add it below the security.SecurityMiddleware entry.

3. Finally, change the following variables inside settings.py:

STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')

STATIC_URL = '/static/'

STATICFILES_DIRS = (
    os.path.join(BASE_DIR, '<app_name>/static'),
    os.path.join(BASE_DIR, 'static'),
)

Be sure to replace <app_name> with the name of your app. Note that this only applies if your static files are stored in (for example) my_project/app/static/app/.

Otherwise if your static folder is located in my_project/app/static:

STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),)
4. Lastly, disable the built-in Django static file server as follows:

    INSTALLED_APPS = [
    # ...
    'whitenoise.runserver_nostatic',
    'django.contrib.staticfiles',
    # ...]
    

Solution 3:

As you have moved your project to another directory, there is a possibility that the paths of your static directories are also different now. In most scenarios Django uses Apache, nginx, or some other web server to serve static files. One point to note is that your static directory should be publicly accessible. I went through a problem like this before; what I did was move the static dir to the document root mentioned in the Apache config file.

So move your static files to the Apache doc root and update the static directories in settings.py to refer to the static directory in your Apache doc root. I hope this helps.

Does docker reuse images when multiple containers run on the same host?

My understanding is that Docker creates an image layer at every stage of a Dockerfile.
If I have X containers running on the same machine (where X >= 2) and every container has a common underlying image layer (e.g. debian), will Docker keep only one copy of the base image on that machine, or does it keep multiple copies, one per container?
Is there a point where this breaks down, or does it hold for every layer in the Dockerfile?
How does this work?
Does Kubernetes affect this in any way?

Solutions/Answers:

Solution 1:

Docker's Understand images, containers, and storage drivers documentation covers most of this.

From Docker 1.10 onwards, all the layers that make up an image have an SHA256 secure content hash associated with them at build time. This hash is consistent across hosts and builds, as long as the content of the layer is the same.

If any number of images share a layer, only one copy of that layer will be stored and used by all images on that instance of the Docker Engine.

A tag like debian can refer to multiple SHA256 image hashes over time as new releases come out. Two images that are built with FROM debian don't necessarily share layers; they only do if the SHA256 hashes match.

Anything that runs the Docker Engine underneath will use this storage setup.

This sharing also works in the Docker Registry (version 2.2 and up gives the best results). If you push images with layers that already exist on that registry, the existing layers are skipped. The same goes for pulling layers to your local engine.
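
You can verify the sharing yourself by comparing the layer digests of two images; a quick sketch (the image names are just examples):

docker inspect --format '{{json .RootFS.Layers}}' debian
docker inspect --format '{{json .RootFS.Layers}}' my-image-built-from-debian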

How to enable logging for iptables inside a Docker container?

I created some Docker images lately in order to set up a container with OpenVPN and firewall (iptables) support.
So far most things are working fine, but as I had some issues with the firewall, I added some more iptables rules to log dropped packets to /var/log/messages. I noticed, though, that even when something is dropped, no log file can be found under /var/log.
Thus my question is: how does Alpine Linux handle (system) logging, and how can I check the iptables log specifically?
UPDATE
As larsks pointed out, LOG-target logging from containers has been disabled in the kernel in order to prevent DoS attacks that flood the host's logs.
In order to get logging to work, I installed ulogd and followed the instructions from here.

Solutions/Answers:

Solution 1:

The problem is not Alpine Linux. The problem is that you are trying to log from the iptables stack inside a Docker container, and to the best of my knowledge the kernel doesn't handle messages generated by iptables LOG targets in network namespaces other than the global one. LOG messages in network namespaces are intentionally suppressed to prevent a container from performing a DoS attack on the host with a high volume of log messages. See this commit in the kernel, which explicitly disabled LOG support in containers.

Your best bet is to look at packet counts on your firewall rules to see what is matching and where packets are being dropped. You may also have some luck with the NFLOG target and ulogd.
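
A sketch of the NFLOG approach inside the container, assuming ulogd is configured to listen on the same netlink group (the group number is arbitrary):

# As the last rules in the chain: log whatever is about to be dropped, then drop it
iptables -A INPUT -j NFLOG --nflog-group 1 --nflog-prefix "DROP: "
iptables -A INPUT -j DROP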

Why doesn’t my newly-created docker have a digest?

I have been following the Docker tutorial here, and built a test image on my local OSX machine by committing changes to an existing image and tagging it with three different labels:
# docker images
REPOSITORY         TAG     IMAGE ID       CREATED         SIZE
adamatan/sinatra   devel   fccb6b4d21b4   8 minutes ago   469.5 MB
adamatan/sinatra   junk    fccb6b4d21b4   8 minutes ago   469.5 MB
adamatan/sinatra   latest  fccb6b4d21b4   8 minutes ago   469.5 MB

However, none of these images has a digest:
# docker images --digests adamatan/sinatra
REPOSITORY         TAG     DIGEST   IMAGE ID       CREATED         SIZE
adamatan/sinatra   devel   <none>   fccb6b4d21b4   9 minutes ago   469.5 MB
adamatan/sinatra   junk    <none>   fccb6b4d21b4   9 minutes ago   469.5 MB
adamatan/sinatra   latest  <none>   fccb6b4d21b4   9 minutes ago   469.5 MB

Other test images I have created with a Dockerfile do have a digest.
Why do some images have a digest and some don’t? Is it related to the way the images were created (Dockerfile or not)?

Solutions/Answers:

Solution 1:

Firstly, please keep in mind that a digest can represent a manifest, a layer, or a combination of them (we normally call that combination an image).

Manifest is a new term that was introduced with Docker Registry V2. Here is a short description taken from the Docker Registry V2 slides (pages 21-23):

  • [Manifest] describes the components of an image in a single object
    • Layers can be fetched immediately, in parallel.

When you get the digests with the command docker images --digests, the digest is the SHA256 hash of the image manifest, while the image ID is the hash of the local image JSON configuration (which is different from the manifest). Consequently, if an image doesn't have an associated manifest, the digest of that image will be "<none>".

Normally, two scenarios can leave an image without an associated manifest:

  1. This image has not been pushed to or pulled from a V2 registry.
  2. This image has been pulled from a V1 registry.

To generate a manifest, the easiest way is to push the image to a V2 registry (a V1 registry will not work). The Docker client will generate a manifest locally, then push it along with the image layers to the registry. When you pull the image back, it will have a manifest.

Once the manifest exists, your image digest should no longer be "<none>".
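
For the images from the question, that would look something like this (assuming you are logged in to a V2 registry such as Docker Hub):

docker push adamatan/sinatra:latest
docker images --digests adamatan/sinatra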

Solution 2:

Yes, it is related to how the images were created. Docker can be a real stinker at times.

This may be helpful for you in this case.
