In thinking more about the per-user filesystem I proposed earlier, I realized: Why should any process see files it doesn't need? If they aren't required, let's make them disappear. Then the question becomes: How do we know what files a program needs? The answer is: package management.

In Debian-based Linux systems this is all handled by dpkg. Dpkg knows about every package, what files it contains, and which other packages it requires to be installed. Assuming all the packages are configured correctly, an application in a package shouldn't need any files beyond those in its own package and its requirements. So let's hide all the others.
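The information is all sitting there in dpkg's database: `dpkg-query -L pkg` lists a package's files, and its `Depends` field names the requirements. As a rough sketch, here's how you might pull package names out of a dpkg-style `Depends` line (the format used in `/var/lib/dpkg/status`); the sample input is made up for illustration:

```python
import re

def parse_depends(depends_field):
    """Parse a dpkg-style Depends field into a list of package names.

    Handles version constraints like "libc6 (>= 2.7)" and
    alternatives like "libgl1-mesa-glx | libgl1" (keeps every
    alternative, since any one of them could satisfy the dependency).
    """
    names = []
    for clause in depends_field.split(","):
        for alternative in clause.split("|"):
            # Strip the version constraint, e.g. "(>= 2.7)".
            name = re.sub(r"\(.*?\)", "", alternative).strip()
            if name:
                names.append(name)
    return names

# A Depends line in the style of /var/lib/dpkg/status:
print(parse_depends("libc6 (>= 2.7), libx11-6, libgl1-mesa-glx | libgl1"))
# → ['libc6', 'libx11-6', 'libgl1-mesa-glx', 'libgl1']
```

Walk that recursively over each requirement's own `Depends` and you have the full closure of packages, and from there the full set of files, an application is entitled to see.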

The real question then becomes performance. For every program startup we would have to check which package the executable is in, find the files for it and all its requirements, and then build a filesystem with only those files included. But I think with caching and progressive loading those problems could be solved. The reality is that most applications rarely ask for directory listings. Most of the time programs ask for specific files they've been compiled against, like /lib/libmylib.so or /usr/share/myapp/icon.png. So the database would only need to be queried for specific files, not for every file the application could conceivably need.
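In other words, each open() becomes a single cached lookup rather than an upfront filesystem build. A minimal sketch of the idea, where the in-memory `FILE_DB` dict and the `owning_package`/`visible` helpers are hypothetical stand-ins for the real dpkg file database (what `dpkg -S` consults):

```python
from functools import lru_cache

# Hypothetical stand-in for the dpkg file database; in practice this
# lookup would consult /var/lib/dpkg/info/*.list, as "dpkg -S" does.
FILE_DB = {
    "/lib/libmylib.so": "libmylib",
    "/usr/share/myapp/icon.png": "myapp",
    "/etc/shadow": "passwd",
}

@lru_cache(maxsize=4096)
def owning_package(path):
    """Which package owns this path? Cached, so repeated opens are cheap."""
    return FILE_DB.get(path)

def visible(path, allowed):
    """Answer one open() request: is this path owned by the app's own
    package or one of its (transitive) requirements?  One lookup per
    request -- no directory scan, no filesystem built up front."""
    pkg = owning_package(path)
    return pkg is not None and pkg in allowed

# Packages in the app's dependency closure (made-up names):
allowed = frozenset({"myapp", "libmylib", "libc6"})
print(visible("/lib/libmylib.so", allowed))  # → True
print(visible("/etc/shadow", allowed))       # → False
```

Directory listings, when they do happen, could be answered the same way but more slowly, and the cache means the common hot paths are only ever resolved once.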

Security frameworks like SELinux and AppArmor try to achieve many of the same goals for files on the filesystem (they also cover more, like networking). But for simplicity's sake the policies often get coarse-grained, so most applications can read anything in /lib. By involving package management we can keep asking "Why should they?" and still keep the answer simple.

Though this would probably be better implemented as a pluggable security module rather than a FUSE filesystem. Implementation decisions go to the person doing the implementation.


posted Oct 10, 2007 | permanent link