Centralized Package Manager


We were managing an increasing number of servers, and our strategy (for increased security) was to not allow servers to "pull" any critical information. Instead we preferred a "push" technique. That, of course, led us to abandon "typical" tools like yum, up2date, etc.

That was how CPacMan was born.

CPacMan initially was just a bunch of Bash/Perl scripts to push software to servers, but over time the complexity of our infrastructure grew and RedHat (our platform of choice) dropped the Perl bindings for RPM, so CPacMan was rewritten in Python.

How does it work

As explained in the intro, we "push" things out to servers whenever we think it is appropriate.

A reasonable question would be "why not automate errata deployment on servers?". The answer: because on more than one occasion we've seen vendor patches break the functionality of software on our systems. And to clarify, it wasn't the vendor's fault: our developers were using bugs in the software as "features", and when the vendor fixed those bugs, the software stopped behaving as expected.

Staged approach

Server classes

CPacMan allows (but doesn't force) you to implement a staged approach. You surely have development/testing/production servers. Within the CPacMan framework you can define those three classes so that packages deployed in testing have no way of landing in development or production unless you "promote" them. Your packages thus go through all the stages: from development to testing and into production.
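The promotion rule above can be sketched in a few lines of Python. This is a minimal illustration, not CPacMan's actual code: the stage names, repository layout, and the `promote` helper are all assumptions made for the example, with one static repository directory per server class.

```python
import shutil
from pathlib import Path

# Hypothetical stage layout: one static repository directory per class,
# in promotion order.
STAGES = ["development", "testing", "production"]

def promote(package: str, from_stage: str, to_stage: str,
            repo_root: str = "/srv/cpacman/repos") -> Path:
    """Copy a package from one stage's repository into the next one.

    A package never reaches the next stage implicitly; promotion is
    always an explicit copy between static repositories, and only one
    stage forward at a time.
    """
    if STAGES.index(to_stage) != STAGES.index(from_stage) + 1:
        raise ValueError(f"can only promote one stage forward: {from_stage} -> {to_stage}")
    src = Path(repo_root, from_stage, package)
    if not src.exists():
        raise FileNotFoundError(src)
    dst = Path(repo_root, to_stage, package)
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dst)
    return dst
```

Because each stage is its own repository on disk, "promotion" is nothing more than copying the exact tested artifact forward, so production can never see a package that skipped testing.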

Server grouping

You can also group servers using metadata fields within the server configuration files. This lets you define errata deployment stages similar to the strategy described in "Server classes": you define "update levels", collect the servers at each level, deploy updates there, verify that everything keeps working as expected, and escalate to the next level until you reach production.
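A sketch of how such grouping might look in practice, assuming the per-server configuration files parse into simple dictionaries. The field names (`class`, `update_level`) and server names are illustrative, not CPacMan's real schema.

```python
# Hypothetical server metadata, as it might be parsed from the
# per-server configuration files.
SERVERS = {
    "web01": {"class": "production", "update_level": 2},
    "web02": {"class": "production", "update_level": 1},
    "db01":  {"class": "testing",    "update_level": 1},
}

def servers_at_level(servers: dict, level: int) -> list:
    """Collect the servers belonging to one update level, so updates
    can be pushed level by level and verified before escalating."""
    return sorted(name for name, meta in servers.items()
                  if meta["update_level"] == level)
```

You would then iterate over the levels in order, pushing to `servers_at_level(SERVERS, 1)` first, watching the result, and only then moving on to level 2.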

Deployment consistency

Since CPacMan works with static repositories, you can guarantee that your deployments are consistent across all machines: if you used the staged approach, your servers will receive the same set of patches unless you've modified the repositories between deployments. Even then, there are ways to keep your deployments consistent, such as excluding some repositories from the auto-update procedure.
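One way to verify that guarantee after a push is a simple drift check: every server in a stage should report the identical package set. The sketch below assumes each server can report its installed packages as a set of strings; the function name and report shape are invented for the example.

```python
from collections import Counter

def drifted_servers(reports: dict) -> dict:
    """reports maps server name -> set of installed package names.

    Takes the most common package set as the baseline and returns,
    for each server that differs, the symmetric difference against
    that baseline (i.e. what is extra or missing on that machine).
    """
    baseline = Counter(frozenset(pkgs) for pkgs in reports.values()).most_common(1)[0][0]
    return {name: set(pkgs) ^ set(baseline)
            for name, pkgs in reports.items()
            if set(pkgs) != set(baseline)}
```

An empty result means the deployment really was consistent; anything else names the machine and the exact packages that drifted.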

You can still use yum, up2date, etc.

CPacMan doesn't eliminate the use of tools that synchronize you with upstream repositories; it even encourages them. Since CPacMan works only with repositories on a local hard drive, it is your job to fetch the appropriate patches for your machines. Once you have those, you can use cpacman to calculate updates for your servers.
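The "calculate updates" step can be sketched as comparing what a server reports as installed against what the local static repository holds. This is an assumption-laden illustration: real RPM versions are compared with `rpmvercmp` (epoch, version, release), while the sketch below stands in with plain dotted-version tuples.

```python
def parse_ver(v: str) -> tuple:
    """Naive dotted-version parser; a stand-in for rpmvercmp."""
    return tuple(int(p) for p in v.split("."))

def pending_updates(installed: dict, repo: dict) -> dict:
    """Return {package: newer_version} for every package where the
    local repository carries something newer than the server has.

    installed and repo both map package name -> version string.
    """
    return {name: ver for name, ver in repo.items()
            if name in installed and parse_ver(ver) > parse_ver(installed[name])}
```

The key point is that the comparison runs entirely against local repositories you populated yourself, so nothing is pulled from upstream at deployment time.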

Work around bugs in up2date

up2date is somewhat "stupid" when it comes to packages of different arches. For example, we were deploying JPackage Java packages, which all (or most of them) are "noarch"; however, up2date insisted on installing the "i386" packages coming from RedHat. Not only would that generate conflicts on our end, since the packages come from different repositories and might be "incompatible", it was also quite confusing why "noarch" was suddenly replaced with "i386". CPacMan looks after that consistency and will deploy only updates of the same arch.
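The arch-consistency rule boils down to a filter like the one below. This is a simplified sketch, not CPacMan's implementation: it assumes the installed arch per package is known and that update candidates arrive as `(name, version, arch)` tuples.

```python
def same_arch_updates(installed: dict, candidates: list) -> list:
    """Keep only update candidates whose architecture matches the
    installed package's architecture, so e.g. a "noarch" JPackage
    build is never silently replaced by an "i386" one.

    installed maps package name -> arch; candidates are
    (name, version, arch) tuples collected from the repositories.
    """
    return [(name, ver, arch) for name, ver, arch in candidates
            if installed.get(name) == arch]
```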

Be platform independent

CPacMan was implemented as a generic package distribution system, meaning that the same instance of CPacMan can handle virtually any combination of OSes. This functionality is not fully developed (we don't have access to systems other than RedHat), but the framework is quite generic and should allow it if needed.

However, if you're dealing with OSes that use RPM as their packaging system, CPacMan shouldn't have trouble managing those "out-of-the-box". You can support RHL7.x and RHEL.AS{3,4,5} from the same box, which gives you the advantage of being able to oversee and control all those systems consistently and without much trouble.