An Introduction to Digital Measurement

Digital measurement systems are the foundation of modern technology and have been at the forefront of the technology boom over the past century. This webinar gives a brief history of the evolution of measurement systems, then dives into the nitty-gritty of digitizing signals, resolution and quantization, and proper measurement techniques, with discussions of aliasing and how to validate your measurements. It is intended to provide a basic education for those with minimal background in digital measurements, but also to offer some deeper insights and practical tips for even the most advanced users in the field.


Topics:
Aerospace

Transcript

00:00:01 [Music] Welcome to the second in our series of webinars presented by Dewesoft USA. The theme for this webinar is an introduction to digital measurement. It is intended to provide a very broad overview of where these devices originated and how they work, with an insight into some of the technologies, as well as covering some of the issues that

00:00:29 you may encounter and possibly subsequently address. We begin with a little background, some of the historical evolution of these devices; then we will look at how the systems that we use today take the responses from our sensors and convert them into digital representations of what they're measuring. The next subject addresses resolution as well as measurement range

00:00:51 relationships and the process of quantization, and finally we will review a couple of the most common pitfalls and issues that are associated with using digital measurement systems. But before I get started I would like to touch on the subject of acronyms. Unfortunately, you're about to see quite a lot of them. I always consider them to be a convenient way to save ink; of course, we're all

00:01:16 discouraged from printing anything unless it's absolutely necessary, but still we advocate the use of acronyms in our speech. Perhaps it is so those of us who are frequently accused of speaking too quickly can enhance the speed yet further, and I did do that on purpose. As sad as it is for me to admit this, I do have a favorite. Unfortunately its use has stopped, since the technology has all

00:01:39 but disappeared; however, for those of you who are old enough to remember it, it is PCMCIA. It's one of my favorites, and not just because it's one of the longest, but because it's almost poetic: PCMCIA, which actually stands for Personal Computer Memory Card International Association. But the reason why I am so fond of this acronym is because of a good friend of mine, Dr. Peter Blackmore, who informed me

00:02:05 that it really stands for People Cannot Memorize Computer Industry Acronyms. Now, just a quick shout out for another infamous acronym a friend shared with me, and one that I believe should be universally adopted: TLA, which of course stands for three-letter acronym. So basically it's a three-letter acronym for a three-letter acronym. Now, to the history: historically, measurement

00:02:34 systems have been around for centuries. In fact, there is evidence of an ancient system in Syria that dates back over a thousand years BC that was used to meter water, possibly because it was around the time of the last significant climate change event and water may have been scarce. The rather odd-looking device on the left is actually a seismometer dating from AD 123, and it is claimed

00:02:56 that it was so sensitive it could detect earthquakes from hundreds of miles away. The other images are of Edison-era devices, the phonograph and the telegraph recorder. Now, I must confess I added the squiggly lines on the telegraph recorder for a bit of fun; it really printed Morse, if anything. The one thing that all of these have in common is that they all have storage

00:03:19 media, be it sand or liquid, balls into frogs, scribes into wax, or holes into paper. Just to add a little comment, it's kind of funny: I bet nobody ever heard of solid-state frog storage media before. Before we went digital, as we were transitioning from paper and wax media, we used tapes, ferrous oxide to be specific, fundamentally the same stuff that was used in VHS, Betamax and audio

00:03:48 cassettes, if anyone remembers them. And somewhat ironically, magnetic tape theoretically could have a resolution, and we will cover resolution a bit later on, a resolution that significantly exceeds that of any digital system today; however, there simply wasn't, and possibly never will be, the technology to read and write them to that level. Besides, at some point someone had

00:04:10 to be able to use that data for analysis, which meant it either had to be digitized or transferred to some sort of media anyway. My story regarding the early days: when I first got involved in DAQ, I visited an early adopter of digital technology who had one of the first digital systems in the US. When I arrived at their facility, I was horrified to see the digital system

00:04:31 bolted to a bench and hooked up to a tape-based DAQ. They were still using the tape systems for their tests because they were afraid to go completely to the digital system, and were using the digital system to digitize their tape data. I guess prior to that they used thermal paper digitizers for their tapes. Now, most tape recorders used in measurement were

00:04:52 reel-to-reel, but later some even used cartridges very similar to the old video cassettes. And, excuse me, just to bring a margin of reality to the way it was: there are some of us that can remember using paper rolls or tape-to-tape, but I doubt there are many left that remember doing their analysis like they're doing in the image, and I doubt there are many folks left that even know how to use a

00:05:13 slide rule. One important point to make is that in the image the storage media was his head and the paper. And just a quick question to everybody: would anybody like to generate an FFT using that method? Hmm. It's worth having a little insight into a chronology of digital processing, if for no other reason than to appreciate the huge leaps that have happened in just the past 20 years,

00:05:36 which is ironic, as some folks are still using DAQ systems that were designed over 30 years ago. Now, as you can see, it really started in the 50s with Gordon Kidd Teal at Texas Instruments; then, between Jack Kilby's first IC and Mohamed Atalla's passivation process, it went in huge steps, and today we of course all know about the multi-core processors and

00:05:56 probably have a monster machine at home for our games. The image in the lower right may be of interest, as it was actually an early IC that was used in the Apollo space missions, and it was actually part of their guidance systems. But it isn't until you draw a graph illustrating the incredible progress in processing speeds that you really appreciate how things have

00:06:16 advanced: from about 0.64 MIPS in the late 50s to 22,800 MIPS in 2005, and keep in mind that a MIPS is a million instructions per second. Viewing this chart will hopefully bring into perspective the comments I make about DAQ systems that were designed 30 years ago, and illustrate to some degree why they were somewhat limited. Now, how does that look from a

00:06:44 measurement point of view? Well, the paper disc recorder was introduced in the 50s and honestly it's still being used today; low sample rate applications like thermal monitoring and power generation systems etc. still use them, and they still work very well. But the first commercial truly digital data acquisition system, the DataMyte, appeared in the very early 80s, which was initially a single channel, and

00:07:07 I want to use the words wonder machine, not least of all because it could actually process and store rainflow counted data. Then in the early 80s came the very first high channel count, truly digital, multi-channel integrated DAQ, the MEGADAC. The first ones actually had five and a quarter inch floppy disk drives; later on it went to solid state and optical drives, but how many people

00:07:29 actually remember the five and a quarter inch floppy drives? Again in the early 80s, the brainchild of Professor Norman Miller and Professor Darrell Socie of the University of Illinois, the SoMat 2000-series field computer arrived, which was initially a single channel per layer. And to get things into perspective, a 256 kilobyte memory module was about $4,000. Forget the price for a moment,

00:07:55 because you have potentially a hundred and thirty thousand times as much memory in an iPhone with 32 gigabytes, so just take a moment to imagine how much you would have paid for that iPhone if memory prices hadn't gone down. Then in the early 90s along came the SoMat eDAQ, which was one of the most successful DAQ systems in the world and was incredibly advanced for its time. Of

00:08:20 course, that was over 30 years ago. And finally to the Dewesoft SIRIUS, which began life in the mid-2010s and combined not only all of the advances in technology but introduced some unique technologies, such as parallel, or dual-core, ADC operation, and we will cover that technology a bit later on. And just to quantify, there are 34,359,738,368 bytes in 32

00:08:44 gigabytes, compared to the roughly 256 thousand bytes in that $4,000 memory module. So I wanted to end this section on background with a little aside, getting things into perspective, at least to show a more realistic size comparison for these devices, because one thing that

00:09:10 has happened with many of these advances, as with much innovation, is the reduction in overall size. The volume of a full-sized MEGADAC is enough to fit about 20 SIRIUS modules in, and that is a potential total of about 320 channels in the same space as 64 back then. And to end, let's not forget that size was probably not even a thought not so long ago. Now let's have a look at how

00:09:38 the signals get handled inside the systems. I won't read this entire text, but what we will do is introduce a couple of acronyms: DSP, digital signal processor, and FPGA, field programmable gate array. Now, you've probably heard of these anyway; all we need to say is that these devices are incredibly powerful if they're in the right hands. And I say that because a DSP is

00:10:03 basically like a massive library: all you have to do is, well, write all the books and then lay them out in a logical, efficient, optimal way, and don't forget to keep all the volumes together. And, well, actually, to contradict myself a little bit, some of the books may have already been written; for example, a DSP often contains some predefined functions, a common one being the finite impulse

00:10:25 response filter. But of course you still need to know where it is and, wait for it, using the book analogy, be able to read and take advantage of that information. So, using the library analogy, this is really just to illustrate how vast that library could be: imagine each of these blocks is a technical section in your library, and each of the smaller blocks is a series of bookshelves, on each of which there

00:10:48 is a complete set of volumes. Then you need to traverse your way back and forth as you read each volume at the reading desk, after you've checked the reference manual that was in a different section, and coincidentally you need to do that at about one-sixth the speed of light. So, a little background on DSPs and FPGAs. As mentioned before, things have been

00:11:11 shrinking, so obviously over this time the technology has shrunk as well, and a DSP is about 1/10 the size it was thirty-odd years ago. The other thing is that these are really specialized devices, and only a handful of companies even make them: TI, Analog Devices, Motorola and so on. The thing that a DSP does is really simplified by some folks into two categories: multiply and accumulate. The

00:11:35 MAC in DSP means multiply-accumulate, and because of this it has to be able to move things around effectively from this buffer to that buffer, and so memory had to be fast enough to handle it. Oops, sorry, wrong kind of ram. It's one of the many reasons that memory has gotten really fast over the years: it's the demands from the DSP processors and GPUs, for example. In fact, memory now is faster than ever and often needs its own dedicated cooling systems.
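
As a rough, hedged illustration of what multiply-accumulate means in practice, here is a minimal Python sketch of the finite impulse response filter mentioned earlier, written out as explicit MAC operations. The coefficients and test signal are invented for the demo and are not taken from any particular DSP.

```python
import numpy as np

def fir_filter(signal, coeffs):
    """Apply a FIR filter the way a DSP would: one multiply-accumulate per tap."""
    n_taps = len(coeffs)
    padded = np.concatenate([np.zeros(n_taps - 1), signal])  # zero history before the first sample
    out = np.zeros(len(signal))
    for i in range(len(signal)):
        acc = 0.0                                             # the "accumulate" register
        for k in range(n_taps):
            acc += coeffs[k] * padded[i + n_taps - 1 - k]     # one MAC per coefficient
        out[i] = acc
    return out

# Illustrative 5-tap moving-average coefficients (not from any specific device)
coeffs = np.ones(5) / 5.0
noisy = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000)) + 0.2 * np.random.randn(1000)
smoothed = fir_filter(noisy, coeffs)
```

A real DSP would, of course, run one hardware MAC per tap rather than a Python loop, which is exactly why memory bandwidth matters so much.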

00:11:58 The image in the lower center is actually a description of the DSP and FPGA implementation on one of the many interface fixtures at the CERN accelerator in Switzerland. As far as ADC technology is concerned, there are really only a handful of types. You may have heard of Sigma-Delta, sometimes called

00:12:20 Delta-Sigma, or the SAR, the successive approximation register; there are more, others such as the CADC, or capacitive ADC, though they rarely figure in our world. So let's take a very brief look at the two most common. The Sigma-Delta is the mainstay; it is proven and has probably the most airtime in our industry. It is really a single bit that is overclocked, although you shouldn't

00:12:43 relate that phrase to what you may have done to your home games computer; it has almost nothing in common. The overclock is really the sample clock; it is the bit that tells the ADC when to take a sample, and in a Sigma-Delta it is typically at least twice (and that's good old Nyquist there) what you really want, and oftentimes it can be many tens or even thousands of times faster than what

00:13:05 you actually see being available to you for recording. Now, the nice thing about the Sigma-Delta is that it actually characterizes every single signal individually; that is to say, it has a shaping function built in, which means that as long as you are sampling fast enough it is going to represent your signal exceptionally well. Now, the SAR is as different as night is

00:13:25 from day. It is really akin to a state machine: it looks at the incoming signal and basically matches it to a binary representation. The good thing about that is it doesn't try and shape it. Now, if that sounded a little contradictory to what the Sigma-Delta is doing, well, frankly it is, and that's because SARs and Sigma-Deltas are better suited to specific test applications.
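
To make the state-machine description of the SAR a little more concrete, here is a hedged sketch of how a successive approximation register settles on a binary code, one bit per comparison from MSB down to LSB. The 12-bit width and 0-10 V range are illustrative assumptions, not the specification of any particular converter.

```python
def sar_convert(v_in, v_ref=10.0, n_bits=12):
    """Successive approximation: test each bit from MSB to LSB against a trial voltage."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)                  # tentatively set this bit
        v_trial = v_ref * trial / (1 << n_bits)    # internal DAC output for the trial code
        if v_in >= v_trial:                        # comparator decision
            code = trial                           # keep the bit, otherwise drop it
    return code

# Example: a 6.3 V input on an assumed 0-10 V, 12-bit converter
code = sar_convert(6.3)
print(code, code / (1 << 12) * 10.0)   # the code and the voltage it represents
```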

00:13:48 But how do you know which one is the most appropriate? Well, hold that thought: we'll cover sample rate issues and you can see where this is going, and we will also be having an ADC webinar later that you are more than welcome to sit in on, and we will go through the pros and cons at that time. But do keep in mind that some advanced systems can be either, or even both, of these. So let's

00:14:12 have a quick review of how your signal got placed onto your display, in Dewesoft X3 for example. We begin with our analog signal, and this gets passed into what some folks call the first stage, which may comprise an anti-aliasing filter, usually a low-pass such as a Butterworth electrical filter, and then an initial gain. The outcome of that first gain is then subjected to a

00:14:33 secondary filter that you, the user, would typically define; this could be almost anything depending on your system and its capabilities. That is then corrected for any offset, as you don't want to sacrifice the size of your signal because the bias prevented your second gain from being set to a higher level, because it would have clipped the signal. The second stage of gain is then applied,

00:14:53 and this is mainly because most ADCs have specific input voltage ranges, such as 2, 4, 5 and 10 volts. You really don't want to put a microvolt signal into a device that potentially wants 10 volts; that would produce a limited number of bits, and we'll cover that in the resolution section. Then we get to the all-important ADC, where the binary is basically created, and finally we get to

00:15:18 the DSP and the FPGA, where all the real computing takes place, and ultimately you get your squiggly lines displayed. Of course, if you happen to have the dual core, well, that's just one of the cores. The different systems have slightly different approaches, but the conclusion remains the same: to get your initial signal into an accurate, representative digital equivalent that you can see and work with.
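
Purely as a sketch of the signal train just described, with made-up gains, filter corners and ranges rather than the values of any real front end, the stages could be strung together like this:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 10_000                                          # assumed sample rate, Hz
t = np.arange(0, 1, 1 / fs)
analog = 0.002 * np.sin(2 * np.pi * 50 * t) + 0.0005  # small signal with a DC offset, volts

# 1) anti-aliasing low-pass (e.g. a Butterworth) plus an initial gain
b, a = butter(4, 2_000, fs=fs)                       # 4th-order low-pass at 2 kHz (illustrative)
stage1 = 10.0 * lfilter(b, a, analog)

# 2) user-defined secondary filter, then offset correction so the bias doesn't eat headroom
b2, a2 = butter(2, 500, fs=fs)
stage2 = lfilter(b2, a2, stage1)
stage2 -= np.mean(stage2)

# 3) second gain to fill the ADC's input range, then the ADC itself (here 16-bit, +/-10 V)
gain2 = 100.0
adc_in = np.clip(gain2 * stage2, -10, 10)
codes = np.round((adc_in + 10) / 20 * (2**16 - 1)).astype(int)   # quantize to 16 bits
```

In a real system, of course, all of this happens in analog hardware and inside the ADC itself; the code is only meant to mirror the order of the stages.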

00:15:41 Some systems will offer an analog output; these have almost the opposite of the inbound signal train, with a DAC, a digital-to-analog converter, then an offset correction and possibly amplifiers as well. Now, it's probably worth mentioning that this is considered to some degree to be old technology, as modern systems have added digital lines such as EtherCAT;

00:16:03 these oftentimes beat the latency of the original analog conversion as well, effectively removing all of the analog cables and substituting them with a single communication line to the controlling system. So with a test involving, say, 96 channels, where you had to have 96 cables carefully identified and routed into your controller system, you now have one.

00:16:31 Now, this is a very brief mention about both dual core and the bit resolution references that people use. We've probably all heard of SNR, signal-to-noise ratio, DMR, dynamic measurement range, and CMR, common mode rejection (more acronyms), and of course bit resolutions and the references that folks make to those. Well, just for reference, in case you didn't already know, a 16-bit ADC in a perfect world

00:16:53 would be about 96 dB, a 24-bit 144 dB; a simple rule of thumb is to simply multiply the number of bits by six to get rough numbers. But what you may not hear is that no 16-bit system ever achieved 96 dB for its measurement range; they generally do well if they can achieve around 80 dB, and a 24-bit is doing well if it can achieve around 115 dB. But using a

00:17:23 dual core you really can achieve about 160 dB, and that's the equivalent of about 27 bits. Ironically, most so-called 24-bit ADCs are really only 19-bit in the first place. Now, I could talk all day about this subject, but again we're going to have a dedicated webinar, so keep your eyes and ears open for that.
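
The multiply-the-bits-by-six rule of thumb falls straight out of the definition of dynamic range in decibels, 20 * log10(2^N); a small check in Python, using the bit counts quoted above:

```python
import math

def ideal_dynamic_range_db(bits):
    """Ideal dynamic range of an N-bit converter: 20*log10(2^N), roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 19, 24, 27):
    print(bits, "bits ->", round(ideal_dynamic_range_db(bits), 1), "dB")
# 16 -> ~96.3 dB, 19 -> ~114.4 dB, 24 -> ~144.5 dB, 27 -> ~162.6 dB
```

The real-world figures quoted above (roughly 80 dB for a 16-bit system, 115 dB for a 24-bit one) then tell you how many of those ideal bits are actually usable.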

00:17:47 So before we end this section, I'd like to make a quick mention of data integrity, which really is a very broad subject, again something we will cover in more detail in a future webinar, but I should at least mention it before we continue, because some systems approach things in a slightly different way to others. Think back to a comment I made earlier about the price of memory for a system from

00:18:05 the 80s. I bring that up because memory was a premium component in earlier systems and played a big part in how signals were processed, to take advantage of the limited availability of space. Nowadays we have the ability to store vast amounts of information, and the speed and power to process it far more efficiently, so the approach of some devices is: store it all, then reduce it,

00:18:29 with all the assurance of having the original data, that you're unlikely to miss anything, and that you don't lose anything. And I mention this because if you have a SIRIUS, a KRYPTON, an R8 and so on, that's exactly what it's doing, and you may not have even known it. OK, the next section will look at the process of digitizing your signals, the resolution, and the implications

00:18:53 associated with the conversion. Now, we all have computers these days; some of you may be using 64-bit versions of Windows, and, this is really meant for fun, some may still be using computers that run DOS. Remember that, the disk operating system? Yeah, I doubt it, but you never know. Anyway, systems have evolved with the technology, and the systems that were

00:19:18 produced in the 80s and 90s were predominantly 8 or 12-bit systems, which meant that they offered 256 or 4096 discrete levels into which your signal quantities could be placed. The use of discrete placements is really what the word quantization means and, as I've said, that is okay; the resolution issue really only becomes a problem if you have low-level signals

00:19:41 that need to be characterized well, which, as you can see, lose some of their fidelity because there simply wasn't enough space to quantize them into. To get the equivalents and the limitations into perspective, you can see that the equivalent quantization of a 4000 microstrain range signal equated to almost 20 microstrain per bit for the 8-bit systems, and it rapidly gets close to

00:20:07 what I call ridiculous as you increase the number of segments available. If all you were interested in was the high-amplitude reversals, for example for your fatigue prediction, this probably will never even enter into your thought process, but keep in mind that you really don't get all the range we alluded to earlier.
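
To reproduce the kind of numbers on that slide, the resolution is just the signal span divided by the levels available to it; a short sketch using the 4000 microstrain example, with the 10% safety margin (mentioned next) included as an assumption:

```python
def levels_and_resolution(signal_span_ustrain, bits, safety_margin=0.10):
    """Levels left for the signal once a safety margin is reserved, and the microstrain each spans."""
    usable_levels = (1 - safety_margin) * 2 ** bits
    return usable_levels, signal_span_ustrain / usable_levels

for bits in (8, 12, 16, 24):
    levels, step = levels_and_resolution(4000, bits)
    print(f"{bits:2d} bits: {levels:,.0f} usable levels, {step:.5f} ustrain per level")
# 8 bits -> ~230 levels (~17 ustrain each); 24 bits -> ~15.1 million levels
```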

00:20:30 Now, you may have noticed on the previous screen there were actually two ranges, one that said equivalent and one that said reality. These are based on the assumption that, although we have all probably clipped data in the past, chances are that we didn't do it on purpose, so we always try to include a safety margin. In this example the safety margin is 10% of the entire range, so in a 24-bit system you will have the

00:20:51 equivalent of 15 million discrete levels to quantize your data into. If we now take a look at the comparison of the inputs, high and low level signals (and in this example we've used a couple of options available on the STG as a reference), we have illustrated the equivalent of an input of plus or minus 10 volts and plus or minus 1 millivolt, and can see the perfect-world quantized

00:21:15 equivalents. Now, you'll probably agree that the outcomes border on the ridiculous. In addition, if you have the dual-core technology, you could be getting almost this for each end of the same signal. Now, on the flip side, as seen before, when you try and characterize a signal you probably want as much information as you can get. Imagine a signal that looks like this

00:21:41 that is quantized across only three bits of your range; you can probably see where this is going already. What was potentially, let's say, a loading reversal now appears as a plateau, a perpetuation of the loading condition, and of course we really don't know what the peak amplitude was; we only know it was somewhere in that eighth level, and we know that each of the quantized

00:22:04 levels could span hundreds of microstrain, based on our previous comparisons. By the way, this is what people refer to as the quantization error. Again, with the addition of the second core you effectively retain both the high and the lower level data, but with a single core you may have to sacrifice your low-level quality.
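
A hedged illustration of that plateau effect: quantize a peaked signal onto only eight levels (three bits) and the reversal near the peak collapses into a flat top, which is the quantization error just described. The waveform and ranges below are invented purely for the demo.

```python
import numpy as np

def quantize(signal, bits, full_scale):
    """Snap a signal onto the discrete levels of an N-bit converter spanning +/- full_scale."""
    levels = 2 ** bits
    step = 2 * full_scale / levels
    return np.clip(np.round(signal / step) * step, -full_scale, full_scale - step)

t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)    # a decaying, reversing load signal (made up)

coarse = quantize(signal, bits=3, full_scale=1.0)       # the plateau appears at the peaks
fine = quantize(signal, bits=16, full_scale=1.0)        # effectively indistinguishable from the input
print("worst-case error, 3-bit :", np.max(np.abs(signal - coarse)))
print("worst-case error, 16-bit:", np.max(np.abs(signal - fine)))
```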

00:22:32 And if you look at the screen, you'll see what the data really would have looked like if you'd been able to quantize it to a greater degree of accuracy. OK, enough about resolution, quantization and bit issues; let's take a look at some of the most common, yet to a large degree avoidable, issues and pitfalls. Now, there are some things that simply can't be avoided. For example, if your testing is in the desert, there is

00:22:55 a good chance that thermal effects are going to be seen virtually everywhere. So imagine your real signal looked like this, based on some dynamic event, but there was also an ambient change in temperature during the course of your test; it could have resulted in a DC shift that looks something like this. Now keep in mind that this is potentially everywhere: where your sensors are, where

00:23:18 your cables are being routed, as well as where the actual measurement system is. Now, there are a number of approaches you can adopt to at least mitigate these effects. One is of course to insulate everything from ambient thermal changes, and that is of course totally impractical for the vast majority of testing. Or you could simply measure the ambient changes and make an educated assessment of their

00:23:37 influences and basically remove them. And/or you could take advantage of the tools at your disposal: for example, if you have sense lines available you could use them to compensate for thermally induced resistance changes in your sensor cables, and of course do everything you can to minimize the thermal changes at your actual measurement device. Just being cognizant of these influences on your

00:24:01 measurement is the really important thing to take away from this: try and avoid the Rumsfeld effect. And I know that's kind of disrespectful, because frankly what he said actually kind of makes sense, I think. And don't worry, I don't plan to talk extensively about this table; it's just something that I put together for the US Air Force, because they were measuring

00:24:23 components mounted on aircraft that were being subjected to what can only be described as massive changes in temperature in extremely short periods of time: they were parked in very hot environments, then cruising at extreme altitudes, and therefore extremely cold temperatures, in a very brief period of time. The one highlighted section really puts the purpose of sharing this into

00:24:46 context, since it's something that can happen frequently; I've even seen it here in the US, and I live in Michigan. The example is a 50 degree Fahrenheit temperature change, but it could amount to a 350 microstrain offset, assuming your cables were 35 feet long.
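
For anyone who wants to sanity-check that sort of figure, a commonly used approximation for a two-wire quarter bridge is that the apparent strain equals the lead-resistance change divided by (gauge factor times gauge resistance). The wire gauge, gauge resistance and temperature coefficient below are assumptions for illustration, not the values behind the table, so expect the same few-hundred-microstrain ballpark rather than the exact 350 microstrain.

```python
# Apparent strain from lead-wire heating in a 2-wire quarter bridge (illustrative values)
alpha_cu = 0.0039          # copper resistance tempco, per deg C (approximate)
ohms_per_ft = 0.041        # roughly 26 AWG copper, per conductor (assumed)
length_ft = 35.0           # cable length from the example
delta_t_c = 50.0 / 1.8     # 50 deg F expressed in deg C
gauge_r = 350.0            # strain gauge resistance, ohms (assumed)
gauge_factor = 2.0         # typical foil gauge factor (assumed)

r_leads = 2 * ohms_per_ft * length_ft                  # both conductors sit in series with the gauge
delta_r = r_leads * alpha_cu * delta_t_c               # resistance change from the temperature swing
apparent_strain = delta_r / (gauge_factor * gauge_r)   # strain the bridge "sees" but the part never felt
print(f"apparent strain ~ {apparent_strain * 1e6:.0f} microstrain")
```

A three-wire hookup, a full bridge, or the sense lines mentioned a moment ago largely cancels this term, which is exactly why they are worth using.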

00:25:10 Now let's cover something that, being an old guy, I am particularly passionate about, mainly because I'm old enough to have been in testing during the evolution of these technologies, and that is of course aliasing. Now, we've probably all heard of this; we may even have been the victim of it. The fact is that, ironically, digitizing is effectively aliasing your analog data into digital equivalents in the first place, but in these examples we can see that there is enough resolution to capture

00:25:31 all, or at least enough, of what we want. You'll probably notice that the overall shape of your signal is easily discernible in all three of these; in fact, if we were interested in frequency, we haven't missed a thing: if the window is one second, we can see three hertz across all of them. The arrows at the bottom indicate the sample points. Now, if we slow it down a bit, to a large degree we

00:25:54 can still see the frequency in the upper image, and to a lesser degree in the center image. We can see the amplitude in the upper image to a certain degree, and we can see most of the amplitude; if we were looking for peak excursions then we certainly would have got them, statistically, in even the second one. But in the third one, as you can see, it's fundamentally a total loss. So this

00:26:19 screen is really just to illustrate the quantization of the signal, what's going on under the hood. What you've got here is an inbound signal that's being sampled, being quantized, and then being stored. This is part of what you will never see, because

00:26:38 you can't get down to the microscopic level and look at the electrons flowing around. Signal aliasing: of course, the most common human exposure to aliasing is our very own AC system, which is actually cycling at 50 to 60 hertz typically, depending on where you are in the world. We of course don't see it, because we actually only process images at 20 to 30 frames per second.

00:27:02 You've probably heard stories of people seeing things in slow motion if they're involved in an accident, and this is most likely because our bodies create high levels of adrenaline and the entire processing portion is accelerated; it's the kind of fight-or-flight, the animal protection system in us, kicking in. Now, the next most common form is visual aliasing, and the movies used to use 25

00:27:23 frames per second because it sort of matched our processing speed and was enough to make the actually static images appear to move. In fact, if you look at some of the old original Disney movies, a cunning method of only animating every other image was used to take advantage of our processing speed, and it actually made the animations appear much smoother. Now, you'd have to dig

00:27:45 around to find some of the original movies, and a lot of the digitized ones have obviously compensated for that. Now, the animation you're looking at on the screen was created using a CAD application, and you will have to take my word for it that there's no trickery at play here. The handle is fixed to the sprocket on the left and is obviously rotating in a counterclockwise direction.

00:28:03 Now, unfortunately, because PowerPoint cannot handle the frame rate very well, it's actually much slower than it really is. Note that the rotation rate increases then decreases, but keep the handle in mind as a reference to the sprocket, because what we're going to do now is use 50% fewer frames, and it actually appears to be fine. Now, that's

00:28:25 partially because, funnily enough, PowerPoint can handle this, but we are now going to reduce it by 50% again, and I'd like you to pay careful attention to the handle. You will see that the handle is still rotating as it was, but if you watch you will see that the direction of the sprockets appears to change. So watch carefully where the sprockets are

00:28:49 meshing and make a note of what you see. Now, that is something that is virtually transparent to us; it's almost impossible for us to discern the point at which the rotational direction changed, and this is at about 25% of the actual rate. Now we're going to reduce it by 50% again, and you can see once again the handle is rotating in the counterclockwise direction and it's

00:29:16 still very clearly definable as rotating that way, but the sprockets at this point appear to be kind of rocking a little bit; they're not really rotating. And if we go yet further, it all becomes completely impossible to tell that those sprockets are rotating. This is visual aliasing. Now, just to reinforce this, I found this video, which I thought was a particularly good example, because

00:29:40 not only is it funny, but the fact that somebody went to the trouble of building a wheel with sneakers is incredible. What it really shows is those sneakers almost going stationary on the ground at times, because the frame rate isn't fast enough to capture all of the images, not that we would be able to see it in this anyway. OK,

00:30:03 to illustrate the effect in our world, I've got four sine waves, one at 15, one at 50, one at 100 and one at 200, and I'm going to combine those into one. The quick challenge, as they compose themselves into the composite, is: which frequencies do you think you can see? Now, most people quickly see the high frequency, some see the lower frequencies, and not many see the lowest frequency. It's

00:30:26 like those magic eye pictures you may have seen: some people see it, some people don't. I have the benefit of being an old guy and losing my eyesight, so I just say to people, oh, I need new glasses. Anyway, we're going to use this data in just a moment, but before we do I just want to go through this: we use tools such as discrete Fourier transforms, or FFTs shall

00:30:45 we say, to extract frequencies from spectral data. Now, they're often termed fast Fourier transforms because they're fast, although that's not really the reason why they're called fast; anyway, a DFT and an FFT are basically the same thing: they're both trigonometric series that represent the frequencies in some spectral data.
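
If you want to play along at home, here is a minimal sketch that builds the same four-tone composite and pulls the frequencies back out with an FFT; the one-second window and 1 kHz sample rate are assumptions chosen to keep every tone comfortably below Nyquist.

```python
import numpy as np

fs = 1000                                    # sample rate comfortably above Nyquist for 200 Hz
t = np.arange(0, 1, 1 / fs)
composite = sum(np.sin(2 * np.pi * f * t) for f in (15, 50, 100, 200))

spectrum = np.abs(np.fft.rfft(composite)) / len(t) * 2   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(t), 1 / fs)

peaks = freqs[spectrum > 0.5]                # anything with appreciable amplitude
print(peaks)                                 # -> [ 15.  50. 100. 200.]
```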

00:31:05 Now, I created this animation just to illustrate what the Fourier transform looks like if you reverse what we did on the previous screen, a sort of under-the-hood representation, and there are of course a few more sine waves in this example than there were in the composite we just created. This was intended just to be a tad more representative of spectral data, rather than clearly definable sine waves,

00:31:25 but if you've ever wondered what it looks like behind the FFT: if you grab your laptop and turn it sideways, you still won't be able to see this. OK, so here's the composite ticking away and a subsequent Fourier transform, and what we're going to do here is drop each of those in, just to illustrate where they sit within that Fourier transform window. Ironically, you

00:31:48 may see some visual aliasing as they drop onto that screen. So there are our frequencies. Now, herein lies the problem: as you saw with the sprockets, at times they appeared to change direction, and at others they almost stopped rotating and sort of rocked. If we take that combined set of sine waves and sample too slowly, we actually see the same thing in our data. So here we are, sampling a wee bit too

00:32:21 slowly, and at the bottom of the screen you can see we've got a frequency that was not one of the ones we were looking at originally, and there is the problem. So how do we avoid that sort of thing? Well, we can use a little exercise, shall we say; it's really just so that we can understand as much as we can about what we're measuring.
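
And here is the same idea undersampled, as a hedged sketch: sample the 200 Hz component at an assumed 240 samples per second and the spectrum reports a 40 Hz tone that was never in the signal, the numeric equivalent of the sprocket appearing to rotate backwards.

```python
import numpy as np

f_signal = 200                                   # real tone, Hz
fs_slow = 240                                    # too slow: Nyquist is only 120 Hz
t = np.arange(0, 1, 1 / fs_slow)
samples = np.sin(2 * np.pi * f_signal * t)

spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(t), 1 / fs_slow)
print("apparent frequency:", freqs[np.argmax(spectrum)], "Hz")   # -> 40.0 Hz, the alias of 200 Hz

# The folding rule: the alias lands at |f - k*fs| for whichever k brings it below fs/2
print("predicted alias:", abs(f_signal - round(f_signal / fs_slow) * fs_slow), "Hz")
```

That folded-back 40 Hz is indistinguishable, after the fact, from a genuine 40 Hz component, which is exactly why the anti-aliasing filter sits in front of the ADC in the signal chain described earlier.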

00:32:45 After all, you don't use a six-inch rule to measure a 30-foot piece of timber; well, maybe you do if you drank too much at the weekend, but that's another story. So rise time becomes an incredibly important component in making certain determinations. But, I hear you ask, how can we? Let me show you. If you have available to you a system and some nice, easy, intuitive software, you can make many of these determinations in

00:33:06 literally a few moments, and this could save you some concerns later, so it is worth doing from time to time as a matter of course anyway. These are just suggestions, by the way, and I emphasize the word suggestions; I'm not advocating that you delay a test so you can confirm whether what someone said was right or wrong. So this is just a suggestion. OK, what I've done in this

00:33:29 suggestion is simply set up a very simple single channel with a user input to control when I store my data, keeping in mind that if you want to see as much as possible you should set your sample rate as high as possible. Now, that's assuming that your sensor has the capability to go to those ranges, and don't forget that some sensors are only rated to specific frequencies: there's

00:33:51 little point in trying to sample something at 200 kilohertz if it's rated to 20 kilohertz. And as far as the data is concerned, well, you can always delete it later, and it shouldn't be that much anyway. Now, what I've done is set the trigger to be my user input, and since I'm confident I can get enough data in a relatively short duration test, I've set it to capture only

00:34:11 ten-second blocks, and you might also note I've set it to do a multi-file set, again just to keep things in order. Then all you do is run a few quick tests to gather some data. In this series I'm going to start with static data; in essence, the test object completely untouched and unaffected by anything other than the natural surroundings, and you could even

00:34:35 do this with the excitation turned off; that way you even remove possible effects from the power coming from the measurement system, or possible ground loops, and you may want to do this just to become familiar with what does happen when you electrically excite your sensors, or not. In step two, I'm going to use a hammer or some other mechanism to provide a sort of controlled wave through

00:34:56 my test specimen; we want to see what frequencies are involved in the propagation of the wave as it passes through the specimen. Again, you could repeat this step across connected components to gather information about the coherence between them; it could also demonstrate whether damping might improve the behavior of a test object, but we'll cover all of that in a

00:35:15 webinar on modal analysis later. Now we repeat the test, but this time we're going to mechanically excite our specimen, so if this was a vehicle, this would be a short drive, for example. You may need to repeat this under different conditions, different surfaces, different tracks perhaps, to characterize your test object, but again that is something we'll cover in a

00:35:35 later webinar. And now that we have our data, we can perform some very quick FFTs on the test data: firstly the static, then the mechanical (sorry, not the mechanical, the modal, my apologies, or the pseudo-modal, the one that I excited by tapping it with a hammer), and then finally the mechanically excited data. With the mechanically excited data, if it's a vehicle for example, you might want to

00:36:02 do some coast tests as well, by which I mean turn the engine off, if it's an internal combustion engine, and coast, gathering data while it's coasting, so that you take out the effects that are being transferred from that internal combustion engine. Then finally we view all the results together and we look for something common, large amplitude excursions, and of course

00:36:22 anything we think unexpected. From this we can quickly determine where everything is happening in the frequency domain and whether we have any issues involving noise, for example emanated noise, radiated noise, AC interference and so on, and from that we can make determinations of how best to set our sample rates and filters to capture the bit that we're interested in.
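
As a rough sketch of that comparison step (the file names and sample rate below are hypothetical placeholders for recordings you would export from your DAQ software), you could overlay a spectrum for each condition and see where the energy, and the noise, actually live:

```python
import numpy as np
from scipy.signal import welch
import matplotlib.pyplot as plt

fs = 20_000   # assumed export sample rate, Hz

# Hypothetical exported recordings: static (untouched), tap (hammer), drive (mechanically excited)
recordings = {name: np.load(f"{name}.npy") for name in ("static", "tap", "drive")}

plt.figure()
for name, data in recordings.items():
    f, pxx = welch(data, fs=fs, nperseg=4096)        # averaged spectrum for each condition
    plt.semilogy(f, pxx, label=name)
plt.xlabel("Frequency (Hz)")
plt.ylabel("PSD")
plt.legend()
plt.show()
# Look for common peaks, large excursions, and anything unexpected (mains hum, radiated noise, ...)
```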

00:36:44 Now, there are a couple of simple rules to apply to these. If you're looking for frequency information (you've probably heard this before), you should apply the Nyquist principle and at least double the frequency you're looking for. If the amplitude characteristics are of importance, for example because you're planning a structural fatigue test, then use the 10F, or ten-times, principle and

00:37:06 factor your sample rate by 10. Some folks prefer to be safe than sorry and go as high as 20, or even higher, to capture what they need; and remember, the higher the sample rate, the better the characterization of the input signals.
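
Those two rules of thumb boil down to a one-line calculation; a small helper, using the multipliers quoted here, with 20x as the safe-than-sorry option:

```python
def suggested_sample_rate(f_max_hz, purpose="frequency", cautious=False):
    """Rule-of-thumb sample rate: 2x f_max for frequency content, 10x (or 20x) for amplitude fidelity."""
    if purpose == "frequency":
        factor = 2                     # Nyquist minimum
    else:                              # amplitude / fatigue work
        factor = 20 if cautious else 10
    return factor * f_max_hz

print(suggested_sample_rate(500))                                       # -> 1000 Hz
print(suggested_sample_rate(500, purpose="amplitude"))                  # -> 5000 Hz
print(suggested_sample_rate(500, purpose="amplitude", cautious=True))   # -> 10000 Hz
```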

00:37:31 Now, I've heard many times that something is sampled at a given rate because "that is how we do it". There is in principle nothing wrong with that statement, because somebody probably did these sorts of tests to get that sample rate, or they knew something about the structure or the material; but don't forget that they may have been using systems with limited capabilities, and with the power that's available to you today you may be able to reinforce their determinations or, and this is tongue-in-cheek, heaven forbid, challenge their determinations.

00:37:53 And I will point out an important statement on this screen: always remember that you can take away, but you can't add back in. So you can always take away, but you can't add back in, regardless of whether some software suggests that you can; it typically uses something like wave characterization to make its assumptions, and frankly the use of the word assumption is always a kind of

00:38:19 no-go for me. We all know the breakdown of the word assume and what it makes out of you and me; well, you probably know where I was going with that. Anyway, more is better, and remember, memory doesn't cost $4,000 for 256 kilobytes anymore. That doesn't mean you have to collect all of your data at 200 kilohertz; it means you could collect all of your data at 200 kilohertz, or whatever

00:38:46 your system limit is. So, in conclusion, although this webinar could have gone on for days, we want to summarize this entire subject in one phrase: the key is simply to appreciate what you use to measure, explore its potential, and have a good understanding of the hows and whys, and possibly push the limits to appreciate why it's done the way it is. After all, you have the tools

00:39:10 available to you, and if you don't, talk to your local Dewesoft representative; they'll help you get the best tools for the job. [Music]