[lug] Fwd: Simple counter?

Anthony Foiani tkil at scrye.com
Sat May 25 12:06:27 MDT 2013

Lori R <lightningrose at gmail.com> writes:

> I've not used atomics, but a mutex would be the usual solution for a
> thread-safe implementation in a single process, and I also inferred
> that Jeff, the OP, wanted something that would work with multiple
> processes.

Atomics are generally faster, as they only have to synchronize access
to a single value; mutexes are defined to synchronize all program
state visible across the lock/unlock pair.

I use both in my $dayjob project, and pthread_mutex_lock shows up
quite a bit on my "perf top".

In C++, the mutex approach is something like this (using boost,
although quite a few of these things got into 'std::' in C++11):

  boost::mutex mutex;
  int global_value = 0; // protected by mutex

  int value;
  {
      boost::unique_lock< boost::mutex > lock( mutex );
      value = global_value;
  }

That's all wrappers around pthread_... calls, but having it in C++
allows for exceptions (so I don't need to check return values all the
time) and destructors (so I don't need to remember to unlock manually,
even on early returns or exceptions).

The equivalent with atomics is something like:

  std::atomic< int > global_value( 0 );
  int value = global_value;

The latter will likely be much faster on most CPUs.  (And on CPUs
that lack native atomic read-modify-write instructions, the
implementation probably devolves to a mutex anyway.)

There are options for getting even more performance out of atomics, if
you know exactly what sort of memory model you need to be dealing
with.  I've not yet needed to go there, though.

> My solution would be to use a text file for persistence, and a
> simple program [... see below ...] that uses semaphores (you could
> think of semaphores as system wide mutexs) to coordinate access to
> the file.  This solution assumes a local file mounted on a single
> computer. 

Hm.  Can you think of a situation where the semaphore would be more
resilient than advisory locking?  They're both voluntary.

> For multiple nodes in a network, the earlier suggestion of using a
> database may be the easiest solution.

Or some sort of cluster election algorithm.  (Databases are one more
layer on top of such a thing, especially if you have to deal with high
availability / avoiding single points of failure.)

> (without gotos ;))

Why the hate for 'goto'?  So far as I know, it's fairly accepted
practice for doing staged cleanup in languages that lack stack
unwinding and destructors.

Without using 'goto' in my program I'd either have to replicate all
required cleanup at each stage, or call a new function as soon as I
had another value to clean up.

For what it's worth, this use of 'goto' is very widely used in the
Linux kernel, and it's even recommended in the documentation:

  (or: http://preview.tinyurl.com/pef2bdg )

Where they recommend constructs that look like so:

  if (dma_set_mask(dev, DMA_BIT_MASK(32))) {
          printk(KERN_WARNING
                 "mydev: No suitable DMA available.\n");
          goto ignore_this_device;
  }

As a final example -- and I'm honestly not sure whether it's for or
against -- I did some work with crypto tokens recently.  There's a
huge amount of context that needs to be cleaned up in the correct
order.

The full source can be found here:

  (or: http://preview.tinyurl.com/pzosq7w )

And the cleanup bits look like this:

      CMS_ContentInfo_free( ci );

      /* these certs are actually "owned" by the libp11 code, and are
       * presumably freed with the slot or context. */
      sk_X509_free( extra_certs );

      PKCS11_release_all_slots( p11_ctx, p11_slots, num_p11_slots );

      PKCS11_CTX_unload( p11_ctx );

      PKCS11_CTX_free( p11_ctx );

      EVP_PKEY_free( key );

      ENGINE_free( pkcs11 );

      ENGINE_free( dyn );

      BIO_vfree( out_sig_file );

      BIO_vfree( in_data_file );

      ERR_print_errors_fp( stderr );

      ERR_remove_state( /* pid= */ 0 );
      CONF_modules_unload( /* all= */ 1 );

      return exit_code;

Can you suggest a cleaner way to do that in plain C?
