hosted services

Perforce has designed a very fast and efficient source control system. It requires a central (or decentralised) server that the clients communicate with over the network.

Perforce can be downloaded from their downloads page. For the most basic setup you only need to download the Helix Server and the command-line client for your operating system.

command overview

From the user's perspective, all commands are issued with the p4 binary.

  1. To add files to the depot you would run: p4 add document.txt
  2. To open files for edit you would run: p4 edit document.txt
  3. To delete files from the depot you would run: p4 delete document.txt
  4. To submit the above changes to the depot you would run: p4 submit [document.txt]
  5. To revert your work you would run: p4 revert [document.txt]


Integration is the process of taking one file and either copying it to, or merging it with, another file.

Example: if you wish to take file MAIN/source.c and merge it into DEV/source.c then you would run:

p4 integ MAIN/source.c DEV/source.c
p4 submit -d branch

The -d parameter in the submit command sets the changelist description to 'branch' when we submit.

If DEV/source.c does not exist already then the submit will not ask any resolution questions. If we now make a change to source.c and repeat the integration we will get a different response:

p4 edit MAIN/source.c
echo line >> MAIN/source.c
p4 submit -d edit
p4 integ MAIN/source.c DEV/source.c
p4 submit -d 'merge'
Merges still pending -- use 'resolve' to merge files.
Submit failed -- fix problems above then use 'p4 submit -c 4'.

This is telling us that there is a conflict between MAIN/source.c and DEV/source.c as there has been an edit. This is easily resolved though, and we can use an automatic resolve:

p4 resolve -am
/home/project/DEV/source.c - merging //depot/MAIN/source.c#2
Diff chunks: 0 yours + 1 theirs + 0 both + 0 conflicting
//desktower/DEV/source.c - copy from //depot/MAIN/source.c

The -am flag tells resolve to do automatic resolutions if possible, then we submit the pending changelist:

p4 submit -c 4

To prove this worked we will ask the server to run a grep for us over all files named source.c, matching every line (the pattern .):

p4 grep -e . .../source.c
//depot/DEV/source.c#2:Some text
//depot/MAIN/source.c#2:Some text

On occasion the automatic merge may not do what you want; when this happens you will need to either accept theirs or accept yours. It is important to understand the perspective in this situation.

If you run:

           ,--------------------------------------------- source
          |               ,------------------------------ target
          |              |
          |              |
          v              v
p4 integ MAIN/source.c DEV/source.c 
//depot/DEV/source.c#2 - integrate from //depot/MAIN/source.c#2
    ^                                          ^
    |                                          |
    |                                          `--------- theirs
    `---------------------------------------------------- yours

Theirs will be the file in the depot, whilst yours will be the file in the client workspace.

This can be inspected with the diff command. At the resolve prompt you can enter dt or dy, for diff theirs or diff yours, respectively. This can be followed by at or ay, accept theirs or accept yours. Optionally you may enter am to accept the merged result.


p4 can accept a list of files on standard input; provide the list as a file with -x, or pipe it to standard input:

p4 -x /tmp/list add
egrep '^.+' /tmp/list | p4 -x - add

In the first example we used a file; in the second we piped only non-empty lines to p4 add.
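As a sketch, the list-building part can be exercised without a server; the p4 invocation at the end assumes a configured client workspace, and the file names here are illustrative:

```shell
# Build a list of files, dropping blank lines before handing it to p4.
printf 'a.c\n\nb.c\n' > /tmp/p4list          # a raw list with a blank line
egrep '^.+' /tmp/p4list > /tmp/p4list.clean  # keep only non-empty lines
cat /tmp/p4list.clean                        # a.c and b.c, one per line
# p4 -x /tmp/p4list.clean add                # needs a live Perforce server
```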

rcs keywords

Love them, or hate them, they have their use. Personally, I like to use these in headers or footers to show the changelist number or revision of the file. This is easily done through the following commands:

p4 edit -t text+k MAIN/source.c
p4 submit -d keywords MAIN/source.c


Central servers are very efficient: the effort of DB management is mostly cached at the server. If every client were to cache the DB instead, they would quickly find contention for memory between the DB file cache and the applications people actually work in. It is therefore better for a single computer to cache the DB than for every workstation to cache it. However, there are some situations, such as portable working, where it makes sense for individuals to keep their own local server.

To get started with DVCS, you need to do the following at your server to allow clients to fetch and push:

p4 configure set server.allowfetch=2
For server 'any', configuration variable 'server.allowfetch' set to '2'
p4 configure set server.allowpush=2
For server 'any', configuration variable 'server.allowpush' set to '2'

Then, you can configure a local decentralised depot with the same case sensitivity and Unicode awareness as your centralised depot:

p4 init -p P4PORT

In this case, P4PORT is the address of your existing master depot. You then need to run the following to configure the master as a remote:

p4 remote origin
Address:    perforce.example.com:1666
    //stream/main/... //...

Once you've done this, you are free to fetch a copy of your central depot:

p4 fetch

It is possible to configure multiple remotes and specify each using the -r flag with the remote name. We named ours 'origin' as this is the conventional default.


Of course, if you wish, you can set up a local DVCS using 'clone' if your master server has a remote spec configured already.

master$ p4 remote origin

Replace //... //... with //stream/main/... //... and set the Address: to the network address of your master, then:

p4 clone -p perforce.example.com:1666 -r origin

Once this is complete you will have the depot contents (unless the remote spec restricts this to a subset).

You may note that when you ran p4 init it created a .p4config file in the root directory. Each line in the .p4config file sets an environment variable. If you inspect it you may note the following line:

P4PORT=rsh:/bin/sh -c "umask 077 && exec p4d -i -J off -r '$configdir/.p4root'"

This is a clever way of starting the Perforce server 'p4d' on demand. The rsh prefix in P4PORT tells the p4 client to spawn the /bin/sh shell and execute a command; normally you would see tcp or ssl followed by an IP address and TCP port here. The remaining parameters set the file-creation umask and then execute the p4d server with its own set of flags.
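The effect of that umask can be demonstrated in isolation: any file the on-demand p4d creates (its journal, db.* files and so on) ends up readable by the owner only. A minimal sketch, independent of Perforce:

```shell
# Show what umask 077 does to a newly created file: 666 & ~077 = 600.
d=$(mktemp -d)
( umask 077 && touch "$d/journal" )
stat -c '%a' "$d/journal"    # prints 600: owner read/write only
rm -r "$d"
```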

Your workspace client will need to be configured to take account of the remote depot stream. The following would match what we have done already:

    //stream/main/... //[client name]/...

Once added, update your local files with p4 sync //.... You now have a local copy of your central depot to work with.
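For illustration, the relevant part of the client spec might then look like the following; the client name and root are hypothetical:

    Client: my-client
    Root:   /home/me/project
    View:
        //stream/main/... //my-client/...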


A useful feature of working with DVCS is the ability to un-submit work prior to pushing to the master depot server. This is accomplished in the following way:

  1. edit and submit files locally
  2. 'unsubmit' by running p4 unsubmit //...[files]#rev
  3. unshelve the changelists associated with the above command
  4. resolve any conflicts
  5. delete the shelved files and changelists by running: p4 shelve -d -f -c [changelist] followed by p4 change -d [changelist]
  6. repeat the above steps, excluding step one, as necessary
  7. submit a single changelist as a resolve of the merged edits
  8. push the changes to the master depot by running p4 push

Resubmitting is a real benefit if you wish to keep your depot pristine. If you are happy to work locally, and are confident that your system has good local storage and a reliable network (so you can get to your files from home and office), then DVCS could be a solution that works for you. There is another situation where DVCS is helpful, and that is when network connectivity is so poor that you need a local copy.

In fact, resubmitting is such a useful feature that I often create a DVCS depot in /dev/shm (ramdisk) for a short period of time whilst I experiment with a new idea before submitting it to the master. To do this I have an alias set up as follows:

cd $( mktemp -d -p /dev/shm ) \
    && p4 init -p master:1666 \
    && p4 remote -o origin | sed -e 's|^\s\+//...|\t//stream/main/...|g' | p4 remote -i \
    && p4 fetch \
    && p4 client -o | p4 client -i

Tailor the paths to suit your needs; otherwise you will clone the master in full on a potentially regular basis.
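The sed step in the alias above can be checked in isolation: it rewrites the default remote mapping into a stream mapping. For example:

```shell
# The default remote spec maps '//... //...'; rewrite the left-hand side
# so that only //stream/main/... is fetched, keeping the rest of the line.
printf '    //... //...\n' |
    sed -e 's|^\s\+//...|\t//stream/main/...|g'
```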


Now, for something completely different.

As you may have noticed, when you run p4 [branch|remote|submit|job|...], the editor named by P4EDITOR (or EDITOR) is executed with a form file to edit. If you use vim, it is often helpful to set the following to perform some automatic tasks for you when this file is edited:

autocmd BufRead /tmp/* call Perforce()
function Perforce()
    if getline(1) =~ 'Perforce'
        set tw=72 fo+=aw
        call search("^Description:")
        normal jl
    endif
endfunction

Here we set a handler for all files edited in /tmp (which is where the temporary form is stored for editing). The handler looks at the first line of the file; if it matches 'Perforce' then the text width is set to 72 columns, and automatic paragraph formatting ('a') and trailing-whitespace paragraph continuation ('w') are appended to the format options. A search for 'Description:' is then done, and the cursor is moved down one line and over one character, ready for input at the description.

things you cannot do

One of the things that you cannot do with Perforce is store a file whose name contains ... (an ASCII ellipsis), since ... is reserved as a Perforce wildcard. As a qmail (qmail.html) enthusiast I like using .qmail files for mail rules; sometimes spam harvesters pick up mail addresses that have been condensed and are presented as [realname]-initial...@example.com. It is not possible to store a .qmail-initial... file in this way. Alternatives are:

  1. create a maildrop mailfilter rule. This adds processing to the mailfilter rules, requires invoking maildrop per delivery and processing all the rules within, and does not scale.
  2. manage it via an alternative method, such as a package which expands. This does not scale well either; it requires more stages in configuration management and creates a split between Perforce management and package management of the same file set.


readlink: Invalid argument

Change 32590 created with 1 open file(s).
Submitting change 32590.
Locking 1 files ...
edit //depot/path/to/remote/file#2
readlink: /path/to/local/file: Invalid argument
Submit aborted -- fix problems then use 'p4 submit -c 32590'.
Some file(s) could not be transferred from client.

The solution is to reopen the file, p4 reopen -t text+k, to remove the symlink attribute. Here I'm using text+k as that was appropriate for what I was doing at the time, but text or binary would also have worked. This then allows you to submit the changelist with p4 submit -c 32590, for example.


This has nothing to do with DJB's tinydns database; that is something different, which uses the extremely fast read-only CDB file format.

I once had the idea that it would be more efficient to store some smaller files in the meta database by specifying the +T file type, which indicates 'tiny.db' storage. The idea is that file reads should be quicker, since the OS has already cached the database file's inode and perhaps does less work to locate the storage blocks.

For example:

p4 add -t +T hosts

Adding data to the meta database should be done with some caution: there is a tipping point beyond which it is less efficient to store data in the tiny.db, which I estimate to be around 10k. Smaller source and HTML files are appropriate here.

To back the tiny.db file up you will need to add the following to your backup process. This will not be part of the same atomic operation when you run a checkpoint/journal rotate, so you will need to decide for yourself if you wish this to happen before or after the journal/checkpoint operation.

p4d -r . -xf 857 >tiny.db.chk