SELinux NFS home directories

Note to self:

If you’re using an NFS-mounted home directory on a machine with SELinux enforcing, make sure you run:

setsebool -P use_nfs_home_dirs 1

I don’t know what else breaks without this, but key-based ssh authentication fails.
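A quick way to check the boolean before and after — a sketch; `getsebool` shows the current value and `-P` is what makes the change persist across reboots:

```shell
# Check whether the boolean is currently enabled
getsebool use_nfs_home_dirs

# Enable it; -P writes the change to the policy store so it
# survives a reboot (needs root)
setsebool -P use_nfs_home_dirs 1
```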

LaTeX: add both appendices and annexes

I’m using the appendix package in a LaTeX document to add appendices and have them appear in my table of contents. I’m also using the hyperref package to get links from the toc to the sections. I wanted to add a further ‘annexes’ part to the document.

This post gave an example of how to use the appendix package to add additional parts of the document with chapters starting from 1 again. It almost worked, but in the toc the annex chapters were linked to the appendix chapters (so annex I was linked to appendix A, etc.). When I printed out the value of \@currentHref (what hyperref uses to link to in the toc) in my annex chapters, it was listing them as appendix.A. If I did a \renewcommand on \theHchapter (hyperref’s representation of the chapter counter) after the change to the definition of \thechapter, I at least got appendix.I, so the Roman numerals bit was working.

I can’t for the life of me figure out where the ‘appendix’ prefix is coming from, as I changed the values of \appendixname, \appendixpagename and \appendixtocname. In the end, I just gave up and added an additional \annexname to the \theHchapter definition. \@currentHref ends up as appendix.Annex.I, but at least it’s unique, so the links work in the toc. Pretty sure I’m missing something obvious, but at least this works for now:

% Create Roman Numeral Labelled Annexes
\makeatletter % treat @ as a letter instead of a control word.

and then in the document, something like:
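A sketch of how the preamble definition and the in-document usage might fit together — reconstructed from the description above, so everything beyond the \annexname-in-\theHchapter trick is an assumption:

```latex
% --- preamble (sketch) ---
\makeatletter
\newcommand{\annexname}{Annex}
\newcommand{\annexes}{%
  \clearpage
  \setcounter{chapter}{0}%                       restart chapter numbering
  \renewcommand{\thechapter}{\Roman{chapter}}%   Roman numerals: I, II, ...
  \renewcommand{\theHchapter}{\annexname.\Roman{chapter}}% unique hyperref anchors
  \renewcommand{\chaptername}{\annexname}%
}
\makeatother

% --- in the document body (sketch) ---
% \appendix
% \chapter{Some appendix}   % Appendix A
% \annexes
% \chapter{Some annex}      % Annex I
```

The \Roman{chapter} redefinition is what gives the I, II, III numbering; putting \annexname into \theHchapter is the workaround described above that keeps the hyperref anchors unique.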



Big Data in Mental Health Symposium – Live Stream (23/07/2014)

Dockerised BRAT annotation tool

Been playing with Docker this week. First attempt is a dockerised version of the brat annotation tool.

Dockerfile and config bits are on GitHub:

Image is on Docker Hub, so you can just run it with:

docker run -d -p 8000:80\

SELinux colouring book!


max file limits on boot2docker

Am trying to run apache2 in an ubuntu-flavoured Docker container using boot2docker on OS X, and I’m hitting a problem:

# service apache2 start
 * Starting web server apache2
/usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted)
Setting ulimit failed. See README.Debian for more information.

A ulimit -n on the container says 1024, and trying to set it any higher fails. This is because the limit comes from the host (the boot2docker VM):

$ boot2docker ssh
$ ulimit -n
1024

We can change this in the boot2docker vm as the physical host (my macbook) has higher limits:

$ boot2docker ssh
$ sudo su
# ulimit -n 8192
# ulimit -n
8192

ulimit only changes the setting for the current shell and processes started by it, so as soon as you drop back to a regular user

$ exit
$ ulimit -n
1024
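This per-shell behaviour is easy to demonstrate anywhere, not just inside the boot2docker VM — a subshell can lower its own limit without touching the parent shell:

```shell
# ulimit applies to the current shell and its children only
parent=$(ulimit -n)

# Lower the limit inside a subshell; the parent is unaffected
child=$( (ulimit -n 256; ulimit -n) )

echo "parent: $parent  subshell: $child"
```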

boot2docker uses a stripped-down Linux flavour based on Tiny Core. This doesn’t use PAM, so it doesn’t have the /etc/security/limits.conf file which would normally be used to set global limits.

The only solution I could come up with is to add a call to ulimit in the docker init script on the boot2docker machine:

vi /etc/init.d/docker

ulimit -n 8192

start() {
    mkdir -p "$DOCKER_DIR"

    ... etc ...
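The same edit can be scripted rather than done in vi — a sketch, assuming a GNU-style sed inside the boot2docker VM and that the start() line looks as above:

```shell
# Insert the ulimit call just before the start() function definition
# in boot2docker's docker init script (run inside the boot2docker VM)
sudo sed -i '/^start() {/i ulimit -n 8192' /etc/init.d/docker
```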

Then to restart the docker daemon:

sudo /etc/init.d/docker restart

This change is only temporary though. If you want to add it to the boot2docker ISO permanently, you could always build your own ISO:

Linux overcommit of memory

A VM with limited memory fell over today, so I had to learn a bit about Linux memory allocation. Notes for future reference:

The Linux kernel generally responds to memory requests positively. By default, it uses a heuristic algorithm that denies blatantly unserviceable requests but allows a degree of overcommitment, on the grounds that most processes ask for more than they need most of the time. This usually works reasonably well, but if you’re overallocating there’s an inevitable risk of an out-of-memory error. When this happens, the oom-killer is invoked and will gleefully traipse through your system slaughtering processes according to its own arcane whims*. This can properly mess up your system, so once it’s been invoked you probably want to consider a full reboot in case it’s knocked out any fundamental processes (hopefully after you’ve investigated and fixed whatever is causing you to run out of memory).

In some situations it might be better to be more strict about over-commitment of memory in the first place. The parameter vm.overcommit_memory controls how the kernel responds to requests for memory and takes one of three values:

0: The default. Use the heuristic thing. Allocate memory unless a process is obviously taking the piss.

1: Just hand out memory, don’t worry about overcommitment (so big risk of oom, but potentially better performance for memory intensive tasks)

2: Strict accounting. Don’t hand out memory if it would take the total committed memory beyond swap + vm.overcommit_ratio% of RAM; allocations beyond that limit fail rather than overcommit

The default vm.overcommit_ratio setting is 50%.
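The current settings can be inspected without root under /proc/sys/vm — a sketch (the sysctl lines that change them are commented out because they need root):

```shell
# Current policy: 0 = heuristic, 1 = always allow, 2 = strict
cat /proc/sys/vm/overcommit_memory
cat /proc/sys/vm/overcommit_ratio

# The ceiling enforced under mode 2 shows up in /proc/meminfo as
# CommitLimit (Committed_AS is what's currently promised)
grep Commit /proc/meminfo

# To switch to strict accounting (root required):
#   sysctl -w vm.overcommit_memory=2
# Persist it in /etc/sysctl.conf (or /etc/sysctl.d/) to survive reboots.
```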

* It uses some heuristic to determine which processes would be least damaging to kill off, but it can still take out vital stuff.