Audio Banter

Audio Banter (https://www.audiobanter.co.uk/forum.php)
-   uk.rec.audio (General Audio and Hi-Fi) (https://www.audiobanter.co.uk/uk-rec-audio-general-audio/)
-   -   Couple of cd queries, model numbers later (https://www.audiobanter.co.uk/uk-rec-audio-general-audio/8962-couple-cd-queries-model-numbers.html)

Brian Gaff February 4th 16 08:10 AM

Model numbers and a new description of fault.. was Couple of cd queries, model numbers later
 
and no doubt one of his more recent mains plugs.
I'd dispute the supermarket CD player sounding better, as they do tend to
sound brash and bright, which is not how this Panasonic sounds at all: it's
detailed, has good dynamic range, and avoids the tendency to gurgle subtly
on strings that occasionally shows up on the Marantz.
Brian

"Eiron" wrote in message
...
On 02/02/2016 12:13, Brian Gaff wrote:
Marantz
CD6000 Ose
Now has issues with CD-Rs, particularly detecting them, and track starts
cannot be manually selected unless selected by going backwards through the disc.
Lens cleaned with only marginal improvement.
Dropouts on CD-RWs.

Panasonic
DVD s500
Has poor software when used as a CD player.
It does not seem to allow gap-free playing of continuous CDs with track
markers. It acts as though it's doing track-at-once rather than disc-at-once,
if we are talking recording terms, but this is on playback. Seems it's a
firmware issue from new.
Wondered if anyone knew whether it gets updated via a CD or something.
It was very cheap so I cannot really complain. It has a wonderful sound on
CDs though, better than the Marantz.


Just get another twenty quid DVD player from the supermarket.
That will play CDs, CD-Rs and CD-RWs properly with a wonderful sound,
better than a Marantz OSE. Though if you want it better than a KI
Signature,
you'll need a Russ Andrews SCART to phono audio interconnect. :-)

--
Eiron.


--
----- -
This newsgroup posting comes to you directly from...
The Sofa of Brian Gaff...

Blind user, so no pictures please!


Brian Gaff February 4th 16 08:15 AM

Model numbers and a new description of fault.. was Couple of cd queries, model numbers later
 
They most certainly do not sound the same. I think much of the problem is
in the error correction and the later analogue circuits. Some sound dull and
a bit like some FM tuners with over-zealous MPX filters that phase shift
like mad.

Bit like when Eurovision used to come via analogue land lines.

This Panasonic even plays some of the very early first-generation AAD
Philips CDs better than I've heard them. No harsh gritty bits, though of
course some still lack deep bass, as it's just not on the disc. However, when
I play really good discs such as the early Telarc ones it's amazing - if only
it actually played them without gaps!
Brian

"Dave Plowman (News)" wrote in message
...
In article ,
Eiron wrote:
Just get another twenty quid DVD player from the supermarket. That will
play CDs, CD-Rs and CD-RWs properly with a wonderful sound, better than
a Marantz OSE. Though if you want it better than a KI Signature, you'll
need a Russ Andrews SCART to phono audio interconnect. :-)


You might find it difficult to find one which gives the usual CD
facilities like showing which track it's playing etc without being
connected to a TV screen. And might be remote control only. Oh - a phono
output could be considered an essential too, although you could derive it
from a SCART.

And I've never been convinced all CD players sound the same...

--
*If you don't pay your exorcist you get repossessed.*

Dave Plowman London SW
To e-mail, change noise into sound.


--
----- -
This newsgroup posting comes to you directly from...
The Sofa of Brian Gaff...

Blind user, so no pictures please!


Jim Lesurf[_2_] February 4th 16 09:02 AM

Couple of cd queries, model numbers later
 
In article , Bob Latham
wrote:

In addition, what may I ask is a gapless recording? Never heard of one of
those before. Gapless is a playback issue for the player, nothing to do
with either the UPnP server or the FLAC files, provided they've been
ripped correctly.


I must admit that I was puzzled when I first encountered people reporting
file replay with 'gaps'. I can't decide if this happens because of:

1) Deliberate choice by the player designer, who assumed all users would
*want* gaps between files/tracks because they'd all be a series of pop
singles.

2) The player's buffer system. In effect, playing the material as if
it filled an integer number of buffer fills, then, when it doesn't, padding
the end of the last - underfilled - buffer with silence.

3) Taking ages to find and start playing the next file.

(1) seems like idiocy or laziness. Anything like this would be OK as a user
*option*, but not as an imposed default.

(3) shouldn't happen these days. Systems should be quick enough. Given a
decent buffering arrangement the start of the next file should be found
and loaded ready in time.

(2) seems like the kind of amateur programming I'd do! Not what I'd expect
from a serious programmer.

So which is it - or is it something else?
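Hypothesis (2) is easy to put numbers to. Here is a minimal sketch, where both the sample rate and the buffer size are illustrative assumptions (nothing in the thread states any real player's buffer size):

```python
# If a player zero-pads the last, underfilled buffer of each track,
# the worst-case gap is one buffer length. Figures here are assumptions.
SAMPLE_RATE = 44100        # CD audio frames per second
BUFFER_FRAMES = 65536      # hypothetical playout buffer, roughly 1.5 s

def padding_silence_ms(track_frames: int) -> float:
    """Milliseconds of silence appended if the final buffer is zero-padded."""
    remainder = track_frames % BUFFER_FRAMES
    pad = 0 if remainder == 0 else BUFFER_FRAMES - remainder
    return 1000.0 * pad / SAMPLE_RATE

# A 3-minute track is 3 * 60 * 44100 = 7,938,000 frames:
print(round(padding_silence_ms(7_938_000)))   # → 1301, i.e. about 1.3 s of silence
```

With a buffer this size the added silence can easily exceed a second, which would certainly be audible between movements of a continuous work.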

I ripped some ancient CD-R recordings I made ages ago using some very
elementary software of the period. Some of these showed 'track at once'
problems where the writing software had added needless 2-sec bursts of
silence between tracks. But I've not seen any software that couldn't do
'disc at once' without this in well over a decade. I'd have hopes that
modern programmers wouldn't make such errors.

Jim

--
Please use the address on the audiomisc page if you wish to email me.
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html


Bill Taylor[_2_] February 4th 16 12:27 PM

Couple of cd queries, model numbers later
 
On Thu, 04 Feb 2016 10:02:10 +0000 (GMT), Jim Lesurf
wrote:

In article , Bob Latham
wrote:

In addition, what may I ask is a gapless recording? Never heard of one of
those before. Gapless is a playback issue for the player, nothing to do
with either the UPnP server or the FLAC files, provided they've been
ripped correctly.


I must admit that I was puzzled when I first encountered people reporting
file replay with 'gaps'. I can't decide if this happens because of:

1) Deliberate choice by the player designer, who assumed all users would
*want* gaps between files/tracks because they'd all be a series of pop
singles.

2) The player's buffer system. In effect, playing the material as if
it filled an integer number of buffer fills, then, when it doesn't, padding
the end of the last - underfilled - buffer with silence.

3) Taking ages to find and start playing the next file.

(1) seems like idiocy or laziness. Anything like this would be OK as a user
*option*, but not as an imposed default.

(3) shouldn't happen these days. Systems should be quick enough. Given a
decent buffering arrangement the start of the next file should be found
and loaded ready in time.

(2) seems like the kind of amateur programming I'd do! Not what I'd expect
from a serious programmer.

So which is it - or is it something else?

I ripped some ancient CD-R recordings I made ages ago using some very
elementary software of the period. Some of these showed 'track at once'
problems where the writing software had added needless 2-sec bursts of
silence between tracks. But I've not seen any software that couldn't do
'disc at once' without this in well over a decade. I'd have hopes that
modern programmers wouldn't make such errors.

Jim


It may be due to the variability of DLNA (UPnP) implementations.

Apparently a renderer should support an action called
SetNextAVTransportURI if it is to play gaplessly when files are pushed
to it by a controller, and not all of them do, as it's an optional
feature of DLNA.
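For reference, SetNextAVTransportURI is an action defined in the UPnP AVTransport:1 service: the controller hands the renderer the *next* track's URI while the current one is still playing, so the renderer can cue it up seamlessly. A rough sketch of what such a call looks like on the wire follows; the control URL and media URI are entirely hypothetical:

```python
# Sketch of a SetNextAVTransportURI call to a UPnP AVTransport renderer.
# The control URL and media URI below are hypothetical; a real controller
# discovers them via SSDP and the device description XML.
import urllib.request  # used only in the commented-out send at the bottom

SERVICE = "urn:schemas-upnp-org:service:AVTransport:1"

def soap_request(action: str, args: dict) -> tuple:
    """Build the HTTP headers and SOAP body for one AVTransport action."""
    arg_xml = "".join(f"<{k}>{v}</{k}>" for k, v in args.items())
    body = (
        '<?xml version="1.0"?>'
        '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" '
        's:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">'
        f'<s:Body><u:{action} xmlns:u="{SERVICE}">{arg_xml}</u:{action}>'
        "</s:Body></s:Envelope>"
    )
    headers = {
        "Content-Type": 'text/xml; charset="utf-8"',
        "SOAPACTION": f'"{SERVICE}#{action}"',
    }
    return headers, body.encode()

# Queue the next track so the renderer can cue it before the current one ends:
headers, body = soap_request("SetNextAVTransportURI", {
    "InstanceID": 0,
    "NextURI": "http://192.168.1.10:9000/music/track02.flac",  # hypothetical
    "NextURIMetaData": "",
})
# req = urllib.request.Request("http://renderer/ctl/AVTransport", body, headers)
# urllib.request.urlopen(req)   # not executed here; needs a real renderer
```

A real controller would discover the renderer's control URL from its device description document before making this call; renderers that omit this optional action simply cannot be told about the next track in advance, hence the gap.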

Jim Lesurf[_2_] February 4th 16 01:08 PM

Couple of cd queries, model numbers later
 
In article , Bill Taylor
wrote:

It may be due to the variability of DLNA(UPNP) implementations.


Ah. Interesting...

Apparently a renderer should support an action called
SetNextAVTransportURI if it is to play gaplessly when files are pushed
to it by a controller, and not all of them do, as it's an optional feature
of DLNA.


OK. I've never used and don't bother with DLNA/UPnP. I just play files using
standard filers, etc. Seems an odd trap for items that claim to work via
DLNA, etc, to fall into. But not something I'd encounter.

Jim

--
Please use the address on the audiomisc page if you wish to email me.
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html


Jim Lesurf[_2_] February 4th 16 01:22 PM

Couple of cd queries, model numbers later
 
In article , Bob Latham
wrote:

I must admit that I was puzzled when I first encountered people
reporting file replay with 'gaps'. I can't decide if this happens
because of:


I had always thought it was the player's inability to open two files
simultaneously, and that the buffer size in the player was insufficient
to sustain music playback whilst one file is closed and another opened
and read.


It shouldn't matter if the player can't open two files at once for
reading in. The key requirement is to have a buffered playout that it can
keep refilling and feeding to the output before the previous buffer fill(s)
has/have been 'used up'. Indeed the whole point of buffering systems is to
give the player a chance to keep up and avoid 'gaps' in the output stream.

There are various ways to present this to the player. But in general they
should give it somewhere to write the next lot of data and 'send' it long
before the previous data it has sent has all been played out. Given the
speeds of modern machines it shouldn't be a problem if the player is
designed to handle it. Matter of careful programming.
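The scheme Jim describes can be sketched in a few lines: the reader side keeps topping up fixed-size chunks across file boundaries, so no silence is ever inserted between tracks. The byte strings stand in for decoded audio, and the tiny chunk size is purely illustrative; this is a sketch of the technique, not any particular player's code:

```python
# Buffered playout across file boundaries: chunks are filled from the
# next file as soon as the current one runs dry, so no gap is inserted.
from io import BytesIO

CHUNK = 4   # tiny chunk size, purely for illustration

def gapless_chunks(files, chunk=CHUNK):
    """Yield fixed-size chunks that span file boundaries - no padding."""
    pending = bytearray()
    for f in files:
        while True:
            data = f.read(chunk)
            if not data:
                break                       # next file simply continues the stream
            pending += data
            while len(pending) >= chunk:
                yield bytes(pending[:chunk])
                del pending[:chunk]
    if pending:
        yield bytes(pending)                # final short chunk; no silence appended

tracks = [BytesIO(b"AAAAAA"), BytesIO(b"BBBBB")]
print(list(gapless_chunks(tracks)))         # → [b'AAAA', b'AABB', b'BBB']
```

Note the middle chunk straddles the track boundary - that is exactly the behaviour a gapped player fails to achieve when it pads each track out to whole buffers instead.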


There can be silence added to the end of tracks at the time of recording
but that is to give an intentional gap between tracks. Nothing to do
with gapless playback as the silence is intentional by the record
company and the track is "playing" during the silence.


Yes. From what Bill wrote it may be something else that's causing the
problem. Afraid I know zero about DLNA, etc. Just how standard filer and
buffer methods can work as a technique.

Jim

--
Please use the address on the audiomisc page if you wish to email me.
Electronics http://www.st-and.ac.uk/~www_pa/Scot...o/electron.htm
Armstrong Audio http://www.audiomisc.co.uk/Armstrong/armstrong.html
Audio Misc http://www.audiomisc.co.uk/index.html


Johnny B Good February 5th 16 01:20 AM

Couple of cd queries, model numbers later
 
On Wed, 03 Feb 2016 20:05:25 +0000, RJH wrote:

On 03/02/2016 04:47, Johnny B Good wrote:
On Sun, 24 Jan 2016 09:48:15 +0000, RJH wrote:

On 21/01/2016 22:03, Johnny B Good wrote:
On Thu, 21 Jan 2016 06:17:48 +0000, Bob Latham wrote:

In article ,
Johnny B Good wrote:

Ouch! or Yikes! How often do you upgrade or swap out failing
disk
drives, I wonder?

I have 3 NAS boxes, one of them off site. The oldest is from 2010
and none of them has ever given any indication of a problem with
their hard drive. Rightly or wrongly I use Western Digital REDS.

Rightly, imo, provided you've addressed the 8 second head unload
timeout
issue (which the lack of failure of the oldest drive could imply
except I don't know whether this is simply because you're only
spinning them for just a few hours per day).

As long as you steer clear of the Seagate rubbish, you shouldn't
suffer
too many problems especially if you check the SMART stats every other
week or so and don't *just* rely on smartmonctrl sending you an email
about imminent failure. :-)


I've read your posts on the unreliability of HDs, and (lack of) wisdom
in allowing systems to 'sleep'.

I'm afraid I simply don't follow a lot of what you say, and have
relied on buying what seem to be be decent brands - WD Reds for my
last upgrade a couple of years' back. I let the system sleep -
basically because it's not that accessible (in a cellar), is not used
anything like 24/7 - maybe 4 hours/day on average, and the electricity
savings seem worthwhile.

I use the old disks (2TB WD-somethings I think, in the old NAS box)
for backup. I've not had a single failure - but then maybe I've been
lucky.


Apologies for the late response, real life, such as it is, got in the
way.


Not a problem!

There's no hard and fast rule regarding the use of spin down power
saving in a SoHo or home NAS box but, unless you're really only making
infrequent use of the NAS, it's always best to avoid lots of spin up
events per day (most home desktop PCs are typically power cycled just
one or two times a day which keeps the spin up event count nice and
low, assuming that distraction known as spin down power saving in the
OS has been completely disabled in order to preserve the operator's
sanity).

It's worth keeping in mind that this is a *power saving* feature (in
reality, an energy consumption saving strategy) with no thought to
whatever consequences there might be in regard of the drive's
reliability. Seagate must be the only drive manufacturer stupid enough
to confuse power saving with temperature reduction if their FreeAgent
'specials' were anything to go by.

Spinning down a modern HDD typically reduces power consumption by around
7 to 10 watts per drive, as observed in the energy consumed at the mains
socket. Each watt-year of energy consumed equates to about a quid's worth
on the annual electricity bill; that represents 8.766 kWh (units) of
electricity used per year. You can check your actual unit costs and
calculate a more exact annual cost per watt's worth of 24/7 consumption.

If you're running the NAS 24/7 and just using spin down power saving
to
minimise its running expenses, you can estimate just how much of a
saving this contributes by calculating the hours of spin down 'sleep'
time each drive enjoys per day. For example, a pair of drives allowed
to 'sleep' overnight may get anywhere from 8 to 16 hours of repose per
day, depending on how often you access the files on the NAS box and the
timeout period you've selected before the drives spin down.

For argument's sake, I'll assume an average of 12 hours per day of
spin-down sleep for both drives and an effective energy saving at the
socket of 10 watts each, 20 watts in total, making for a saving of 240
watt-hours per day. This represents a total of 87.66 units of electrical
consumption saved over the year. Assuming 15p per unit, this would
represent a £13.15 saving on the yearly electricity bill.
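The arithmetic above checks out, as a quick calculation confirms (the 15p unit price and 12-hour sleep figure are the post's own assumptions):

```python
# One watt drawn 24/7 for a year:
HOURS_PER_YEAR = 24 * 365.25               # = 8766 hours
print(HOURS_PER_YEAR / 1000)               # → 8.766 kWh per watt-year

# Two drives saving 10 W each for 12 h a day:
saved_kwh = 20 * 12 * 365.25 / 1000
print(round(saved_kwh, 2))                 # → 87.66 units a year
print(round(saved_kwh * 0.15, 2))          # → 13.15 (pounds, at 15p a unit)
```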

This doesn't strike me as a worthy enough saving to place the drives
under the additional thermal cycling stresses introduced by such a
power saving strategy. However, in the case of a four drive setup, the
savings would be double that and look a more attractive proposition (at
£26.30 a year). In my opinion, that's still not enough to justify such
a strategy, but I'm not you and you may feel differently about the added
risk factor. Also, your usage pattern may allow for an even longer
(unbroken) 'sleep' period per day, and your electricity costs may be
higher than the 'ball park' 15 pence a unit figure I trotted out.


More than happy to accept those figures. But how do you know this
'thermal cycling' is so damaging?


I know because, barring silly manufacturing defects or system design
errors that expose the silicon to electrical stresses beyond its design
limits, thermal expansion and contraction introduces mechanical fatigue
stresses on the silicon die, as well as in circuit board plated-through
holes.

Apart from when the silicon is run right at its upper temperature
limit (125 deg C), where today's modern silicon chips are prey to an
effect known as electromigration, this thermal cycling effect is the
prime cause of post-infant-mortality failure in the HDD controller system.

Modern HDDs, over at least the past decade, subject the spindle motor and
its drive electronics to far less startup stress than the drives of old,
which could subject the PSU, motor windings and electronics to as much as
4 to 5 times the on-speed current demand (which is why the spin-up time
was only a matter of 3 or 4 seconds, as opposed to the 10 to 12 seconds it
takes with a modern drive, on account of the startup current being limited
to a mere 1.5 to 2 times the on-speed current - kinder all round on both
the drive and the PSU).

The fact that the Google stats showed only a weak correlation between
failure rates and temperature (other than right up at the extreme
limit) on drives spinning 24/7 strongly suggests that it's thermal
cycling, rather than absolute temperature, that contributes to high failure
rates. The problem is, there doesn't seem to be any published test data
on the effects of such thermal cycling (at least not in the case of
commodity HDDs as used in desktop PCs).

Googling "effects of thermal cycling on silicon chips" throws up plenty
of research publications in this particular field which suggests that
such thermal cycling effects are an important consideration in the
service life of micro-electronic components.

Sadly, googling "hdd spin down life rating figures" and variations of
this phrase in the hope of being taken directly to a manufacturer's spec
sheet (or an article with such links) only produced discussions in
various web fora on the pros and cons of spin-down power saving, where the
only 'nuggets' were ill-informed opinion best described as "Pearls of
Wiz-Dumb".


One way to minimise spin-down cycles is to choose a long timeout
period. When I toyed with spin-down power saving I chose a 3 hour
timeout to 'sleep', on the basis that during the day the drives would
remain spun up, and only after I'd taken to my bed would the drives
finally spin down for maybe as much as 8 hours' worth of 'sleep' -
effectively no worse than if they'd been used in a desktop PC being
power cycled once or twice a day, without any spin-down power saving to
stress them (or me) any further.

In my case, the savings on all four drives only amounted to some 28
watts and I soon decided the potential savings in my case weren't
enough to justify the extra stress of even an additional one or two
spin down cycles per day for the sake of letting them sleep for just 6
to 8 hours per night (I'm generally using the desktop PC for around 16
hours per day which is often left running 24/7). Assuming an average
'sleep' time per day of 8 hours this would represent a mere £12.27 a
year, assuming 15p per unit cost (I can't recall the actual unit cost
offhand).


Now, I have looked at that - and changed the spin-down triggers to 1
hour.


That seems a more reasonable compromise between MSFT's choice of 20
minutes and my own of 2 or 3 hours. The ideal to aim for is to set it so
it doesn't spin down (too often) during your daily sessions at the
computer but does spin down when you're safely tucked up in your bed.

When you mentioned a 4 hour per day figure of usage, it wasn't clear
whether this was a single 4 hour session or just an estimate over a
longer 8 to 16 hour period. If you were talking about a single 4 hour
session, that one hour spin-down timeout should certainly do the trick.


You have mentioned unofficial firmware patches in the past - and I'm not
too happy with that, must say.


I'm afraid you've lost me there. I've *never* recommended unofficial
firmware patches... *ever*! I've certainly recommended the use of Western
Digital's own officially sanctioned WDIDLE3 tool to increase the head
parking timeout from its insanely short 8 second default to a more
useful 300 second value (and acknowledged the existence of *nix
equivalents used by the Linux and BSD fraternity). Perhaps it was my
mention of the *nix version of WDIDLE3 that you are referring to?


Whatever the actual savings figure proved to be, it didn't strike me as
enough justification to subject the drives to any spin-down cycling at
all, so I gave up on the idea of chasing after such savings, especially
as I was burning up some 70 odd quid's worth of electricity per year
just keeping my collection of UPSes powered up.

I was able to save 20 quid a year alone just by decommissioning a
SmartUPS700. Now, the only UPS maintenance loads I have are the
BackUPS500's 3 watts load (protecting the NAS box) and the 7 or 8 watts
of an ancient Emerson30 450VA rated UPS which sits in the basement
'protecting' the VM Superhub II cable modem/router with what I suspect
is a well cooked set of 7AH SLAs which wouldn't last 5 seconds should
the mains disappear unexpectedly (I really ought to check it out one of
these days).

Bearing in mind what I was already spending to protect against an
event that last occurred over a quarter of a century ago, you can well
understand my reluctance to increase the risk (even if only slight) of
premature disk failure for the sake of a saving that was a mere
fraction of what I was already squandering on UPS maintenance costs.

If you can optimise the spin down power saving time out period to
keep
the average spin up cycles per day below 5 or 6 (you can check this in
the SMART logs) and still accumulate enough spin down sleep hours to
make a worthwhile saving, then go for it otherwise you might be better
off avoiding spin down power saving altogether. It's hard to know where
the 'tipping point' between unwarranted risk and useful energy savings
lies with such a strategy.
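On checking the spin-up count in the SMART logs: with smartmontools installed, the relevant attributes are Start_Stop_Count and Load_Cycle_Count in the output of `smartctl -A`. A rough sketch of pulling them out follows; the sample report text is made up for illustration:

```python
def smart_counts(report: str) -> dict:
    """Pull raw values of the spin-down-related attributes from `smartctl -A` text."""
    wanted = ("Start_Stop_Count", "Power_Cycle_Count", "Load_Cycle_Count")
    counts = {}
    for line in report.splitlines():
        fields = line.split()
        # smartctl -A rows: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[1] in wanted:
            counts[fields[1]] = int(fields[9])
    return counts

# Made-up sample of the relevant rows from a report; in real use you would run
# subprocess.run(["smartctl", "-A", "/dev/sda"], capture_output=True, text=True).
sample = """\
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       1572
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       311
193 Load_Cycle_Count        0x0032   195   195   000    Old_age   Always       -       16840
"""
print(smart_counts(sample))
# → {'Start_Stop_Count': 1572, 'Power_Cycle_Count': 311, 'Load_Cycle_Count': 16840}
```

Dividing Load_Cycle_Count by Power_On_Hours (attribute 9, not shown above) gives a rough cycles-per-day figure to compare against the 5 or 6 a day guessed at here.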

My guess (and it's only a guess) would be no more than 5 or 6 a day
on
average. I think a close look at the more detailed specs on the hard
drive might offer up a clue in terms of the maximum spin down cycles
lifetime rating which the manufacturer may or may not have opted to
publish. If you can't find such a figure for the models of drives
you're actually using, you can always look for such a figure for *any*
model *or* make to get some idea of at least one manufacturer's take on
this particular aspect of drive reliability. I think I may even have
seen such a figure but I can't recall which brand or model or even what
the figure was - It would have held no interest for me seeing as how I
was avoiding spin down for reasons beyond the matter of mere power
saving.


Nothing of mine is that critical. In fact, I'm beginning to wonder if
90% of my data is actually required. Just photos and documents (which
are also cloud stored). Most of the rest (music and video) I could
download, or call on friends for their copies. I'd need a database,
obviously.

So while my reasons are not that thought through, the consequences of
total loss are not that serious.

I think what you're saying is that potential problems are easily
avoided, but I'm afraid I'm stuck thinking that the failure event is
statistically unlikely, and the energy/money saving is worthwhile.

Not knocking you - just saying!


Well, of course, only you know what is best for your particular
scenario. I was simply pointing out that such power saving strategies are
usually not in tune with a strategy based on reliability considerations.

I suppose, if the drives aren't running particularly hot and only go
through a modest number of spin up cycles per day, there probably isn't
very much in it (perhaps the difference between getting 4 years rather
than 5 years of service life which becomes a bit academic if you're
planning on replacing them every 2 or 3 years anyway).

As you've mentioned, reliability is very much a matter of statistical
probability. As long as you're prepared to deal with any sudden disk
failure, you're in the same boat as the rest of us. Unless your data
storage needs are quite modest, even the cheapest backup strategy
(another set of HDDs) is still a significant extra investment over and
above the bare NAS box on its own (and no, RAID is not (and never has
been) a substitute for a proper backup strategy).

--
Johnny B Good

Bill Taylor[_2_] February 5th 16 07:57 AM

Couple of cd queries, model numbers later
 
On Fri, 05 Feb 2016 08:33:51 +0000 (GMT), Bob Latham
wrote:

In article ,
Jim Lesurf wrote:
In article , Bob Latham
wrote:


I had always thought it was the player's inability to open two files
simultaneously, and that the buffer size in the player was insufficient
to sustain music playback whilst one file is closed and another opened
and read.


It shouldn't matter if the player can't open two files at once for
reading in. The key requirement is to have a buffered playout that it
can keep refilling and feeding to the output before the previous buffer
fill(s) has/have been 'used up'. Indeed the whole point of buffering
systems is to give the player a chance to keep up and avoid 'gaps' in
the output stream.


There are various ways to present this to the player. But in general
they should give it somewhere to write the next lot of data and 'send'
it long before the previous data it has sent has all been played out.
Given the speeds of modern machines it shouldn't be a problem if the
player is designed to handle it. Matter of careful programming.


I can't really see how that differs significantly from my comment of
"the buffer size in the player was insufficient to sustain music playback
whilst ....." but anyway.

There can be silence added to the end of tracks at the time of
recording but that is to give an intentional gap between tracks.
Nothing to do with gapless playback as the silence is intentional by
the record company and the track is "playing" during the silence.


Yes. From what Bill wrote it may be something else that's causing the
problem. Afraid I know zero about DLNA, etc. Just how standard filer and
buffer methods can work as a technique.


It may well indeed be that, but in that case it is poor code in the player
that is causing the issue, and not DLNA/UPnP, which I can assure you does
not cause any gapless problems.

Bob.


That's a bit of a philosophical question.

Is a player that complies with the basic DLNA spec but leads to
gapped playback more poorly coded than one that implements some of the
optional parts of the spec and plays back gaplessly?

I've more or less given up on DLNA, mainly because of complete
inconsistency about gapless playback, but also because most of the
controllers in playback devices are absolutely terrible.

RJH[_4_] February 5th 16 08:00 AM

Couple of cd queries, model numbers later
 
On 05/02/2016 02:20, Johnny B Good wrote:
On Wed, 03 Feb 2016 20:05:25 +0000, RJH wrote:

On 03/02/2016 04:47, Johnny B Good wrote:
On Sun, 24 Jan 2016 09:48:15 +0000, RJH wrote:

On 21/01/2016 22:03, Johnny B Good wrote:
On Thu, 21 Jan 2016 06:17:48 +0000, Bob Latham wrote:

In article ,
Johnny B Good wrote:

Ouch! or Yikes! How often do you upgrade or swap out failing
disk
drives, I wonder?

I have 3 NAS boxes, one of them off site. The oldest is from 2010
and none of them has ever given any indication of a problem with
their hard drive. Rightly or wrongly I use Western Digital REDS.

Rightly, imo, provided you've addressed the 8 second head unload
timeout
issue (which the lack of failure of the oldest drive could imply
except I don't know whether this is simply because you're only
spinning them for just a few hours per day).

As long as you steer clear of the Seagate rubbish, you shouldn't
suffer
too many problems especially if you check the SMART stats every other
week or so and don't *just* rely on smartmonctrl sending you an email
about imminent failure. :-)


I've read your posts on the unreliability of HDs, and (lack of) wisdom
in allowing systems to 'sleep'.

I'm afraid I simply don't follow a lot of what you say, and have
relied on buying what seem to be be decent brands - WD Reds for my
last upgrade a couple of years' back. I let the system sleep -
basically because it's not that accessible (in a cellar), is not used
anything like 24/7 - maybe 4 hours/day on average, and the electricity
savings seem worthwhile.

I use the old disks (2TB WD-somethings I think, in the old NAS box)
for backup. I've not had a single failure - but then maybe I've been
lucky.

Apologies for the late response, real life, such as it is, got in the
way.


Not a problem!

There's no hard and fast rule regarding the use of spin down power
saving in a SoHo or home NAS box but, unless you're really only making
infrequent use of the NAS, it's always best to avoid lots of spin up
events per day (most home desktop PCs are typically power cycled just
one or two times a day which keeps the spin up event count nice and
low, assuming that distraction known as spin down power saving in the
OS has been completely disabled in order to preserve the operator's
sanity).

It's worth keeping in mind that this is a *power saving* feature (in
reality, an energy consumption saving strategy) with no thought to
whatever consequences there might be in regard of the drive's
reliability. Seagate must be the only drive manufacturer stupid enough
to confuse power saving with temperature reduction if their FreeAgent
'specials' were anything to go by.

Spinning down a modern HDD typically reduces power consumption by around
7 to 10 watts per drive, as observed in the energy consumed at the mains
socket. Each watt-year of energy consumed equates to about a quid's worth
on the annual electricity bill; that represents 8.766 kWh (units) of
electricity used per year. You can check your actual unit costs and
calculate a more exact annual cost per watt's worth of 24/7 consumption.

If you're running the NAS 24/7 and just using spin down power saving
to
minimise its running expenses, you can estimate just how much of a
saving this contributes by calculating the hours of spin down 'sleep'
time each drive enjoys per day. For example, a pair of drives allowed
to 'sleep' overnight may get anywhere from 8 to 16 hours of repose per
day, depending on how often you access the files on the NAS box and the
timeout period you've selected before the drives spin down.

For argument's sake, I'll assume an average of 12 hours per day of
spin-down sleep for both drives and an effective energy saving at the
socket of 10 watts each, 20 watts in total, making for a saving of 240
watt-hours per day. This represents a total of 87.66 units of electrical
consumption saved over the year. Assuming 15p per unit, this would
represent a £13.15 saving on the yearly electricity bill.

This doesn't strike me as a worthy enough saving to place the drives
under the additional thermal cycling stresses introduced by such a
power saving strategy. However, in the case of a four drive setup, the
savings would be double that and look a more attractive proposition (at
£26.30 a year). In my opinion, that's still not enough to justify such
a strategy, but I'm not you and you may feel differently about the added
risk factor. Also, your usage pattern may allow for an even longer
(unbroken) 'sleep' period per day, and your electricity costs may be
higher than the 'ball park' 15 pence a unit figure I trotted out.


More than happy to accept those figures. But how do you know this
'thermal cycling' is so damaging?


I know because, barring silly manufacturing defects or system design
errors that expose the silicon to electrical stresses beyond its design
limits, thermal expansion and contraction introduces mechanical fatigue
stresses on the silicon die, as well as in circuit board plated-through
holes.

Apart from when the silicon is run right at its upper temperature
limit (125 deg C), where today's modern silicon chips are prey to an
effect known as electromigration, this thermal cycling effect is the
prime cause of post-infant-mortality failure in the HDD controller system.

Modern HDDs, over at least the past decade, subject the spindle motor and
its drive electronics to far less startup stress than the drives of old,
which could subject the PSU, motor windings and electronics to as much as
4 to 5 times the on-speed current demand (which is why the spin-up time
was only a matter of 3 or 4 seconds, as opposed to the 10 to 12 seconds it
takes with a modern drive, on account of the startup current being limited
to a mere 1.5 to 2 times the on-speed current - kinder all round on both
the drive and the PSU).

The fact that the Google stats showed only a weak correlation between
failure rates and temperature (other than right up at the extreme
limit) on drives spinning 24/7 strongly suggests that it's thermal
cycling rather than absolute temperature that contributes to high
failure rates. The problem is, there doesn't seem to be any published
test data on the effects of such thermal cycling (at least not in the
case of commodity HDDs as used in desktop PCs).

Googling "effects of thermal cycling on silicon chips" throws up plenty
of research publications in this particular field which suggests that
such thermal cycling effects are an important consideration in the
service life of micro-electronic components.


I can give it a go:

http://www.springer.com/cda/content/...562-p173959749

So, for example, the author suggests a relationship between thermal
'experiences' and current. I couldn't possibly interpret those results,
though. There's mention of the solder type (lead is more affected - so
older disks? That data is probably from about 2009) and of heatsink
temperatures (my disks never experience higher than 30C - the author's
paper *starts* at 40C, rising to 70C?!). While the pictures look
drastic, and do suggest cause, the statistics look lazy to me - but
that's almost certainly because they assume the reader has a high level
of competence (not like me!) and certain assumptions are industry
standard (no stated error rates, very odd sampling references). Things
'start to happen' at/around the 60,000 cycle mark (maybe 30 years in my
case).

So while (even) I can see something might be there, I have no idea how
that translates to my circumstances.

Sadly, googling "hdd spin down life rating figures" and variations of
this phrase in the hopes of being taken directly to a manufacturer's spec
sheet (or an article with such links) only produced discussions in
various web fora on the pros and cons of spin down power saving where the
only 'nuggets' were ill informed opinion best described as "Pearls of Wiz-
Dumb"


:-) I don't have the link any more, but I did read some really quite
convincing data from server farms. IIRC, though, those disks were 24/7,
and the finding pointed to configuration (3TB?) and brand as culprits. I
don't suppose we're ever going to get a decent domestic test - so we
tend to rely on anecdote/reviews.


One way to minimise spin down cycles is to choose a long time out
period. When I toyed with spin down power saving, I chose a 3 hour
timeout to 'sleep' on the basis that during the day the drives would
remain spun up and only after I'd taken to my bed would the drives
finally spin down for maybe as much as 8 hours' worth of 'sleep' -
effectively no worse than if they'd been used in a desktop PC being
power cycled once or twice a day without any spin down power saving to
stress them (or me) any further.

In my case, the savings on all four drives only amounted to some 28
watts, and I soon decided the potential savings weren't enough to
justify the extra stress of even an additional one or two spin down
cycles per day for the sake of letting them sleep for just 6 to 8
hours per night (I'm generally using the desktop PC for around 16
hours per day, and it's often left running 24/7). Assuming an average
'sleep' time of 8 hours per day, this would represent a mere £12.27 a
year, assuming a 15p per unit cost (I can't recall the actual unit
cost offhand).


Now, I have looked at that - and changed the spin-down triggers to 1
hour.


That seems a more reasonable compromise between MSFT's choice of 20
minutes and my own of 2 or 3 hours. The ideal to aim for is to set it so
it doesn't spin down (too often) during your daily sessions at the
computer but does spin down when you're safely tucked up in your bed.

When you mentioned a 4 hour per day figure of usage, it wasn't clear
whether this was a single 4 hour session or just an estimate over a
longer 8 to 16 hour period. If you were talking about a single 4 hour
session, that one hour spin down time out should certainly do the trick.


Just guessing - they'd be in use for 4 hours over an 8 hour period. I
don't know what events cause them to wake. For example, about now
(breakfast time) when I've not accessed the NAS, chances are it'd be awake.


You have mentioned unofficial firmware patches in the past - and I'm not
too happy with that, must say.


I'm afraid you've lost me there. I've *never* recommended unofficial
firmware patches... *ever*! I've certainly recommended the use of
Western Digital's own officially sanctioned WDIDLE3 tool to increase
the head parking time out from its insanely short 8 second default to a
more useful 300 second value (and acknowledged the existence of *nix
equivalents used by the Linux and BSD fraternity). Perhaps it was my
mention of the *nix version of WDIDLE3 that you were referring to?


Ah yes - that was it. I remember seeing a post mentioning that soon
after I bought the current WD Red 3TB disks. I did look and found a
reference to the file on the WD site - but it was quite old, and listed
some quite old disks as compatible. So by 'unofficial' I meant not
sanctioned by the manufacturer for recent disks. But I didn't research
it much more than that.

Update - I see it's listed as current (albeit 12/2013):

http://supportdownloads.wdc.com/downloads.aspx?DL

So I may well give that a go, thanks.



Whatever the actual savings figure proved to be, it didn't strike me
as enough justification to subject the drives to any spin down cycling
at all, so I gave up on the idea of chasing after such savings,
especially as I was burning up some 70 odd quid's worth in electricity
per year just keeping my collection of UPSes powered up.

I was able to save 20 quid a year alone just by decommissioning a
SmartUPS700. Now, the only UPS maintenance loads I have are the
BackUPS500's 3 watts load (protecting the NAS box) and the 7 or 8 watts
of an ancient Emerson30 450VA rated UPS which sits in the basement
'protecting' the VM Superhub II cable modem/router with what I suspect
is a well cooked set of 7AH SLAs which wouldn't last 5 seconds should
the mains disappear unexpectedly (I really ought to check it out one of
these days).

Bearing in mind what I was already spending to protect against an
event that last occurred over a quarter of a century ago, you can well
understand my reluctance to increase the risk (even if only slight) of
premature disk failure for the sake of a saving that was a mere
fraction of what I was already squandering on UPS maintenance costs.

If you can optimise the spin down power saving time out period to
keep the average spin up cycles per day below 5 or 6 (you can check
this in the SMART logs) and still accumulate enough spin down sleep
hours to make a worthwhile saving, then go for it; otherwise you might
be better off avoiding spin down power saving altogether. It's hard to
know where the 'tipping point' between unwarranted risk and useful
energy savings lies with such a strategy.
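To put rough numbers on that 'tipping point', here's a sketch of how long a drive's rated start/stop budget would last at a given daily spin-up rate. The 50,000-cycle rating used below is a purely hypothetical example figure, not taken from any particular datasheet - check your own drive's spec sheet for a real number:

```python
# Rough sketch: years until a drive's rated start/stop cycle budget is
# spent, at a given average number of spin-up cycles per day. The
# 50,000-cycle rating is a hypothetical example, not a real spec.
def years_until_budget_spent(rated_cycles, cycles_per_day):
    return rated_cycles / (cycles_per_day * 365.25)

for cycles_per_day in (2, 6, 24):
    years = years_until_budget_spent(50_000, cycles_per_day)
    print(f"{cycles_per_day:2d} cycles/day -> {years:5.1f} years")
# e.g. at 6 cycles/day the hypothetical budget lasts ~22.8 years
```

At a handful of cycles a day the budget comfortably outlasts the drive's likely service life; at an aggressive timeout producing dozens of cycles a day, it starts to matter.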

My guess (and it's only a guess) would be no more than 5 or 6 a day
on average. I think a close look at the more detailed specs on the
hard drive might offer up a clue in terms of the maximum spin down
cycles lifetime rating, which the manufacturer may or may not have
opted to publish. If you can't find such a figure for the models of
drive you're actually using, you can always look for such a figure for
*any* model *or* make to get some idea of at least one manufacturer's
take on this particular aspect of drive reliability. I think I may
even have seen such a figure, but I can't recall which brand or model
or even what the figure was - it would have held no interest for me,
seeing as I was avoiding spin down for reasons beyond the matter of
mere power saving.


Nothing of mine is that critical. In fact, I'm beginning to wonder if
90% of my data is actually required. Just photos and documents (which
are also cloud stored). Most of the rest (music and video) I could
download, or call on friends for their copies. I'd need a database,
obviously.

So while my reasons are not that thought through, the consequences of
total loss are not that serious.

I think what you're saying is that potential problems are easily
avoided, but I'm afraid I'm stuck thinking that the failure event is
statistically unlikely, and the energy/money saving is worthwhile.

Not knocking you - just saying!


Well, of course, only you know what is best for your particular
scenario. I was simply pointing out that such power saving strategies are
usually not in tune with a strategy based on reliability considerations.

I suppose, if the drives aren't running particularly hot and only go
through a modest number of spin up cycles per day, there probably isn't
very much in it (perhaps the difference between getting 4 years rather
than 5 years of service life which becomes a bit academic if you're
planning on replacing them every 2 or 3 years anyway).


16C ATM (and for the past 30 minutes - so I suppose that's fairly
typical for this time of year), maybe 4 cycles a day. Been running just
over a year.

As you've mentioned, reliability is very much a matter of statistical
probability. As long as you're prepared to deal with any sudden disk
failure, you're in the same boat as the rest of us. Unless your data
storage needs are quite modest, even the cheapest backup strategy
(another set of HDDs) is still a significant extra investment over and
above the bare NAS box on its own (and no, RAID is not (and never has
been) a substitute for a proper backup strategy).


Well, I'd like to do the best thing on the basis of the most accurate
information. 'Best' is a heady mix of hope, apathy, science and other
stuff. I'll look at the parking thing, thanks.


--
Cheers, Rob

Java Jive February 5th 16 09:16 AM

Couple of cd queries, model numbers later
 
On Fri, 05 Feb 2016 02:20:20 GMT, Johnny B Good
wrote:

On Wed, 03 Feb 2016 20:05:25 +0000, RJH wrote:

On 03/02/2016 04:47, Johnny B Good wrote:
On Sun, 24 Jan 2016 09:48:15 +0000, RJH wrote:

There's no hard and fast rule regarding the use of spin down power
saving in a SoHo or home NAS box but, unless you're really only making
infrequent use of the NAS, it's always best to avoid lots of spin up
events per day (most home desktop PCs are typically power cycled just
one or two times a day which keeps the spin up event count nice and
low, assuming that distraction known as spin down power saving in the
OS has been completely disabled in order to preserve the operator's
sanity).


I don't think it's asking too much of any user to wait for a HD in a
PC or NAS to spin up when it's not been accessed for a long time. One
just gets used to it.

It's worth keeping in mind that this is a *power saving* feature (in
reality, an energy consumption saving strategy) with no thought given
to whatever consequences there might be in regard to the drive's
reliability. Seagate must be the only drive manufacturer stupid enough
to confuse power saving with temperature reduction, if their FreeAgent
'specials' were anything to go by.


It is certainly true that saving power has to be considered along with
product life. The world is full of examples of electrical and
electronic products that are designed to run 24/7 - fridges and
routers, for example - and, particularly with the latter, switching
them off overnight may lead to premature failure which, when the
economic, environmental and energetic 'costs' of manufacture and
disposal of the products are considered, may be less economic and less
ecological than just leaving them on 24/7 as they were designed to
run.

However, I suspect that is not true of HDs, which were designed to
spin up and spin down to save energy.

Spinning down a modern HDD typically reduces power consumption by
around 7 to 10 watts per drive, as observed in the energy consumed at
the mains socket. Each watt-year of energy consumed equates to about a
quid's worth on the annual electricity bill; that represents 8.766 kWh
units of electricity used per year. You can check your actual unit
costs and calculate a more exact annual cost per watt's worth of 24/7
consumption.

If you're running the NAS 24/7 and just using spin down power saving
to minimise its running expenses, you can estimate just how much of a
saving this contributes by calculating the hours of spin down 'sleep'
time each drive enjoys per day. For example, a pair of drives allowed
to 'sleep' overnight may get anywhere from 8 to 16 hours of repose per
day, depending on how often you access the files on the NAS box and
the timeout period you've selected before the drives spin down.

For argument's sake, I'll assume an average of 12 hours per day of
spin down sleep for both drives and an effective energy saving at the
socket of 10 watts each, 20 watts in total, making for a saving of 240
watt hours per day. This represents a total of 87.66 units of
electrical consumption saved over the year. Assuming 15p per unit,
this would represent a £13.15 saving on the yearly electricity bill.

This doesn't strike me as a worthy enough saving to place the drives
under the additional thermal cycling stresses introduced by such a
power saving strategy.


[snip]

I know because, barring silly manufacturing defects or system design
errors that expose the silicon to electrical stresses beyond their design
limits, thermal expansion/contraction introduces mechanical cycling
fatigue induced stresses on the silicon die as well as in circuit board
through plated holes.

[snip more of same]


Frankly, IME this is ********. I cannot recall a single HD failure in
the electronic PCB, every single one I've ever owned has failed due to
bad sectors developing on the platters. How many drives have you had
fail in the way that you claim? I'd be surprised even at a single
one.

Sadly, googling "hdd spin down life rating figures" and variations of
this phrase in the hopes of being taken directly to a manufacturer's spec
sheet (or an article with such links) only produced discussions in
various web fora on the pros and cons of spin down power saving where the
only 'nuggets' were ill informed opinion best described as "Pearls of Wiz-
Dumb"


Quite, so why are you helping to create/perpetuating yet another urban
myth? The facts on this particular topic are that there are no facts,
so you have no business peddling one viewpoint over another,
particularly when you're going against most users' experience,
including, I would guess, even your own.

Now, I have looked at that - and changed the spin-down triggers to 1
hour.


That seems a more reasonable compromise between MSFT's choice of 20
minutes and my own of 2 or 3 hours.


My PC drives spin down after 5 minutes when running off mains power,
the laptops after 3 minutes when running off the battery. From memory
I think the NASs are the same as the PCs running off mains. I find
the resulting usability and reliability both perfectly acceptable.
--
========================================================
Please always reply to ng as the email in this post's
header does not exist. Or use a contact address at:
http://www.macfh.co.uk/JavaJive/JavaJive.html
http://www.macfh.co.uk/Macfarlane/Macfarlane.html

