In article , Chris Isbell wrote:
> On Sun, 07 Feb 2010 12:42:53 +0000 (GMT), Jim Lesurf
> wrote:
>> Must admit that I have long been used to 'int' being signed 32 bit by
>> default. So I was surprised more recently to see how common it was for
>> x86 GCC and Linux, Doze, etc, to tend to use 16 bit by default. With
>> GCC I use 'long' (32 bit) for any audio values that are integers. (And
>> I have always used 'double' for floating points.)
> From memory (my copy of K&R is at work), int must be at least sixteen
> bits and long at least thirty-two bits.
Yes, page 36 in my copy (2nd edition).
> When people assume 16 bits for int values it causes portability
> problems. I have quite a lot of experience of this from developing
> software for sixteen-bit microcontrollers.
Overall, I largely jumped over that era, I guess, going directly from the
BBC B etc. to 32/26-bit ARM.
However, the curious thing for me is that - IIUC - GCC assumes 16 bits for
'int' even on machines with 32-bit CPUs. That clashes with what K&R wrote
higher up the same page as above - that the choice for 'int' should be
whatever is natural for the machine. But I only recently noticed this quirk,
when I started writing Linux apps to process audio data files.
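
A quick way to settle the question for any particular compiler is simply to
ask it. A minimal sketch using <limits.h> (standard C, nothing assumed
beyond that):

  #include <stdio.h>
  #include <limits.h>

  int main(void)
  {
      /* The C standard only guarantees minimum ranges: int at least
         16 bits, long at least 32. The actual widths are up to the
         implementation, so print what this compiler really provides. */
      printf("int  : %d bits\n", (int)(sizeof(int)  * CHAR_BIT));
      printf("long : %d bits\n", (int)(sizeof(long) * CHAR_BIT));
      printf("INT_MAX  = %d\n",  INT_MAX);
      printf("LONG_MAX = %ld\n", LONG_MAX);
      return 0;
  }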
Writing those apps also led me to use my first 'unsigned long', to cope with
the size of a very large LPCM file! Using 'long' gave the result that the
WAV file had a negative payload! 8-]
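
For anyone hitting the same thing: the size field in a WAV (RIFF) header is
an unsigned 32-bit value, so reading it into a signed 32-bit 'long' makes
any file over 2 GB appear to have a negative payload. A minimal sketch of
the safe way to read it (the file name is just for illustration):

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
      FILE *f = fopen("big.wav", "rb");  /* hypothetical file name */
      if (!f) return 1;

      uint8_t hdr[8];
      if (fread(hdr, 1, 8, f) != 8) { fclose(f); return 1; }

      /* Bytes 4-7 of a RIFF header hold the chunk size, little-endian,
         as an UNSIGNED 32-bit value. Assemble it explicitly rather than
         trusting whatever width and signedness 'long' happens to have. */
      uint32_t riff_size = (uint32_t)hdr[4]
                         | (uint32_t)hdr[5] << 8
                         | (uint32_t)hdr[6] << 16
                         | (uint32_t)hdr[7] << 24;

      /* Stored in a signed 32-bit long, anything over 2 GB - 1
         would print as a negative "payload". */
      printf("RIFF chunk size: %lu bytes\n", (unsigned long)riff_size);
      fclose(f);
      return 0;
  }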
So it does seem plausible that the reported 'clips at -6dB for mono' effect
was due to inappropriate use of 16-bit ints. As you say, perhaps because the
same source code works OK on other platforms, this has passed unnoticed by
the writers.
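
To make the suspected failure concrete: if a stereo-to-mono downmix sums
the two channels in a 16-bit int, the result overflows as soon as the
combined level passes half of full scale - i.e. -6dB. A sketch of the
mechanism and the obvious fix (my guess at what went wrong, not the actual
code in question):

  #include <stdint.h>

  /* Mix two 16-bit samples down to one. Doing the sum in 16 bits
     overflows as soon as (l + r) exceeds 32767 - i.e. anything much
     above half of full scale, which is -6dB. Widening the
     intermediate to 32 bits and clamping avoids that. */
  static int16_t mix_to_mono(int16_t l, int16_t r)
  {
      int32_t sum = (int32_t)l + (int32_t)r;  /* safe: at most 65534 */
      if (sum >  32767) sum =  32767;         /* clamp to 16-bit range */
      if (sum < -32768) sum = -32768;
      return (int16_t)sum;
  }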
All this tends to confirm my own preference for writing my own code when I
can - albeit making my own mistakes in the process! :-)
Slainte,
Jim
--
Please use the address on the audiomisc page if you wish to email me.
Electronics
http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio
http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc
http://www.audiomisc.co.uk/index.html