Let’s all take a moment and acknowledge something: The Global Positioning System is incredible and literally out of this world. It’s easy to take things like GPS for granted. You use it to find the nearest Starbucks, to find your friends, find your phone, get to a job, or track a run after work (if you’re into that sort of thing). The simple, innocent question of “where am I?” sounds more like a joke than a cry for help. We are never lost!
While it’s nice to appreciate what we have, there’s another reason to take a closer look at this system. If you use GPS to do things like mapping, site work, construction, farming or a host of other things you are staking your reputation and livelihood on it. So, let’s dive into some history and facts about good old GPS.
GPS pad placed at the lowest point of the survey
What we know as GPS was brought to birth in 1978 by the U.S. Department of Defense. It was not an easy pregnancy. Going from concept to reality involved observing the Russians, using some advanced science, and implementing good old fashioned trial and error. Money was also involved… about $20 billion in today’s money.
Not surprisingly, it was originally limited to military applications, but in 1983 the gate for additional use swung open when an unsuspecting Boeing 747 was shot down after straying into the USSR’s prohibited airspace due to a navigational mistake. As a side point, the Soviets initially denied blowing it out of the sky but later admitted that, well, in fact they had.
Sometimes it takes a major disaster to get the ball rolling. In this case the decision was made to let this incredible technology off the leash and make it freely available to the public. To make a long story short, we all started using GPS and have never looked back.
The Global Positioning System is made up of a network of 24 to 30 satellites orbiting at an altitude of approximately 12,550 miles above the ground in what is known as a medium earth orbit. The satellites in the GPS constellation are arranged so that users can view at least four satellites from virtually any point on the planet.
Without digging myself or you into a deep scientific hole, here are the nuts and bolts of the operation. It is a three-part system made up of 1. Satellites, 2. Ground Stations, 3. Receivers. Satellites are like stars in a constellation. Ground stations monitor and control the satellites continuously, always knowing where they are and where they will be. Receivers, such as the one in your phone, are constantly listening for signals from those satellites.
Once your receiver calculates its distance from four or more satellites, it knows exactly where you are. With our phones, the precision may only be within a few yards of our exact location, but it’s close enough for practical purposes. Not all receivers are created equal, and for precision work like surveying or creating drone-based maps, more can be done to dial in the accuracy.
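To make the distance idea concrete, here’s a minimal sketch of trilateration in Python. The satellite coordinates and receiver position are made-up illustrative numbers (roughly kilometer-scale), and real receivers also solve for a fourth unknown, their own clock error, which is part of why four satellites are the practical minimum; this toy version assumes perfect clocks:

```python
import math

def det3(m):
    # Determinant of a 3x3 matrix given as a list of three rows.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def trilaterate(sats, ranges):
    """Recover a position from four satellite positions and ranges.

    Subtracting the first sphere equation from the others cancels the
    squared unknowns, leaving a 3x3 linear system, solved here with
    Cramer's rule.
    """
    s1, r1 = sats[0], ranges[0]
    A, b = [], []
    for si, ri in zip(sats[1:], ranges[1:]):
        A.append([2 * (si[k] - s1[k]) for k in range(3)])
        b.append(r1 ** 2 - ri ** 2
                 + sum(c * c for c in si) - sum(c * c for c in s1))
    d = det3(A)
    pos = []
    for k in range(3):
        Ak = [row[:] for row in A]   # replace column k with b (Cramer)
        for j in range(3):
            Ak[j][k] = b[j]
        pos.append(det3(Ak) / d)
    return tuple(pos)

# Made-up satellite positions (km) and a receiver near the surface.
sats = [(15600, 7540, 20140), (18760, 2750, 18610),
        (17610, 14630, 13480), (19170, 610, 18390)]
receiver = (-41.0, -17.0, 6370.0)
ranges = [math.dist(receiver, s) for s in sats]
estimate = trilaterate(sats, ranges)
```

With noise-free ranges the estimate lands right back on the receiver position; real measurements carry errors, which is what the next section is about.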
A number of things can degrade GPS positioning accuracy. Buildings, hills, bridges, trees, atmospheric conditions, receiver design/features, and so on can disrupt the signal and decrease accuracy. For these reasons, while the signal being broadcast remains the same, what the end user receives may not always be as precise as hoped. It’s a good reminder to mind your surroundings.
So where does that leave those of us who require accuracy in the millimeter range? GPS differential corrections are needed. What does that mean? Remember GPS is a 3 part system made up of Satellites, Ground Stations, and Receivers. The Ground Stations play an important role in providing corrections for data collected in the field. Because they are located at a known point, corrections can be applied in real time or during post processing.
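The arithmetic behind a differential correction is surprisingly simple; the magic is in having a station sitting on a precisely known point. Here’s a hand-wavy sketch (the coordinates are made-up numbers in meters, and real systems apply corrections per satellite to the raw range measurements rather than to the final position, so treat this purely as an illustration of the concept):

```python
def dgps_correct(rover_fix, base_fix, base_known):
    """Shift a rover's fix by the error observed at a known base station.

    This works because two nearby receivers see nearly the same
    atmospheric and orbital errors at the same moment.
    """
    error = tuple(m - k for m, k in zip(base_fix, base_known))
    return tuple(r - e for r, e in zip(rover_fix, error))

# Base station was surveyed at (1000.0, 2000.0), but right now its GPS
# reads (1001.2, 1998.5). Apply that observed error to the rover's fix:
corrected = dgps_correct(rover_fix=(1501.2, 2498.5),
                         base_fix=(1001.2, 1998.5),
                         base_known=(1000.0, 2000.0))
```

The same idea powers both real-time corrections (broadcast to the rover as it works) and post processing (applied to logged data afterwards).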
As an example, for mapping applications we use GPS pads that collect data for an hour or more while we fly. Afterwards, that data has corrections applied to it with the end result being pinpoint accuracy. So exactly how accurate is GPS data? Much comes down to the gear you use and what you do with it.
Placing a GPS pad before a flight
It is easy to forget exactly how big this planet really is. Clearly much work has gone into the development of the Global Positioning System. Long before that, men and women did an enormous amount of leg work so that we could all know where we are. That’s another thing we can be grateful for.
So the next time you use Google Maps, or fire up any number of GPS receivers think about the huge task of developing and maintaining a system that we stake our reputation on. Think about everything that led up to that system. At the same time, don’t forget to unplug every now and then and experience what it’s like to be just a little bit lost.
The last 3 posts have dealt with the exposure triangle. If you missed them feel free to check those posts out first, or you’ll get a basic rundown here. Keep this in mind: exposure, as with all things photography related, can be as complex a discussion as you want, but a workable understanding of it isn’t rocket science.
In a nutshell, it’s all about letting the right amount of light hit the sensor on your camera, and having it hit that sensor in the right way for your needs. Think of it this way, if you’ve ever let light into a room by adjusting blinds then you sir (or madam) have what it takes to nail proper exposure.
Aperture is like raising or lowering the blinds. You can lift those blinds all the way up, lower them all the way down, or just open them part way. When you twist the blinds open that’s like adjusting your shutter speed. And finally, if there’s just not enough natural light you can flip a switch and turn on some artificial light. Think of that as ISO.
It’s easy for you to tell if you’ve let enough light in the room. It may be a matter of personal choice, setting the mood, or doing something like reading which would require more light.
With photography some things are personal choice, but the biggest question is: what does the camera need in order to produce good quality images? And don’t be fooled by the way the picture looks on your display; it’s important to go by the numbers or histogram (the histogram is a topic for another time).
Before we go on let’s acknowledge that not all drones/cameras give you the flexibility to use manual exposure to the full. For instance, the Mavic Pro is an awesome little drone but it has a fixed aperture, meaning one part of the triangle just ‘is what it is,’ leaving only shutter speed and ISO to tinker with.
At EPIC, one of the drones we use is an Inspire 2 with the X5s camera, which does not have a fixed aperture. If you super geek out about mapping: yes, the X5s has a rolling shutter, which can be problematic without the proper settings.
The X4s is another nice camera that we use, and it has the advantage of a mechanical shutter. Both are good options; personally I like the quality of the X5s and the better ground sampling distance… but go with whatever you prefer.
So for the sake of demonstration, let’s say you’re wanting to use manual exposure to make an orthomosaic map. You know that to get crisp photos without motion blur you need a shutter speed set at around 1/1000. That means the other two parts of the triangle need to compensate for the faster shutter speed.
Start with the f-stop. Ideally, a somewhat higher f-stop will provide better results for mapping since more of the photo will be in focus with less hazing at the edges. At the same time, it’s necessary to let in enough light to allow for the faster shutter speed. So compromises have to be made based on the available light on the day that you’re mapping.
The last part of the triangle is the ISO but as discussed in the previous article, you really want that number as low as possible to avoid graininess in the photos which in turn throws off accuracy. Some platforms like Propeller won’t even accept photos with an ISO over 400 so there’s not a lot of wiggle room.
Setting it at 100 or 200 is about ideal. If you leave ISO in automatic it will jump around and make it seem like you’re getting perfectly exposed photos when in reality many of them may be unusable. What if you’ve opened your aperture up all the way, pushed ISO as high as you reasonably can, and your light meter is still showing the exposure as below zero?
For one, you can check the exposure up at altitude. Often there is more available light being directed back at the camera, and certain apps like Ground Station Pro let you adjust exposure on the fly (which I happen to like).
Secondly, if you need to choose between slightly under and slightly over, I’d rather the photo be slightly darker (the information is still there) than blow out the exposure (the information is lost forever).
Third, if there still is not enough available light, there are two choices: pick a better day, or lower the shutter speed a bit. Be careful with this one. You can get away with it, but the further the deviation from ideal, the bigger the chance you’re taking.
Proper exposure is a balancing act. Lean out to one side and weight needs to be added to the other side. If not, you fall off, or in this case end up with garbage photos.
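That give-and-take can actually be put in numbers. Here’s a small sketch of the stop arithmetic (the baseline settings are arbitrary choices for the example; the point is that a stop lost on one side of the triangle has to be paid back on another):

```python
import math

def stops(aperture, shutter_s, iso, base=(4.0, 1 / 500, 100)):
    """Exposure relative to a baseline setting, in stops (+1 = twice the light)."""
    base_n, base_t, base_s = base
    return (2 * math.log2(base_n / aperture)   # wider aperture: more light
            + math.log2(shutter_s / base_t)    # longer shutter: more light
            + math.log2(iso / base_s))         # higher ISO: brighter image

# Jumping from 1/500 to 1/1000 for crisp mapping photos costs one stop...
loss = stops(4.0, 1 / 1000, 100)
# ...which doubling ISO (100 -> 200) exactly pays back:
balanced = stops(4.0, 1 / 1000, 200)
```

A result of 0 means the two settings record the same total light, which is exactly the balance the light meter is nudging you toward.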
So, why mess with it? Why not just leave the camera in automatic and worry about not crashing your drone? You can, just don’t expect accurate or consistent results. Using any technology requires that you be present mentally and know how to compensate for non-ideal circumstances.
With time and practice manually setting exposure becomes second nature and only takes a few seconds but it’s part of what separates the hobbyists from the professionals.
Take a look at the images below. They are all acceptable from an exposure standpoint. You can see that by looking at the light meter. However, after reading this series of articles, which one would be the best choice for orthomosaic mapping?
International Organization for Standardization. If that seems random just keep reading!
The third part of our exposure discussion revolves around something called ISO. To understand that, let’s take a (very brief) trip to the past. In the “old days” you had two standards for film sensitivity: ASA and DIN. In 1974 those standards were combined by the International Organization for Standardization (see, it wasn’t so random after all). The light sensitivity setting of both film and digital cameras became known simply as “ISO”.
You know that thing that happens when you walk into a dark room after being out in the bright sun? At first, everything is super dark and it seems like there’s no way you could ever see anything. Then your eyes adjust and you become more sensitive to the available light.
ISO is a similar thing. It’s not changing the amount of available light, but it is adjusting the sensitivity of the camera to that light. It does have limitations though. If you’ve ever been driving in the evening and it’s slowly getting darker, you may not feel the need to turn on your lights because you think you can see just fine. In truth you’re probably missing a lot because the image quality that you are seeing isn’t all that hot. Similarly, as you crank up the ISO the picture may look bright enough, but it will start getting grainy and lose its crispness.
How does ISO relate to 3D mapping projects? Any amount of graininess in a photo is bad for map quality. It’s not just a matter of resolution; it can affect your accuracy. For that reason ISO should always be as low as possible (preferably 100).
These three things (aperture, shutter speed, ISO) make up what’s called the exposure triangle. You can’t change one thing without it affecting the others, and everything needs to balance. How to perform that balancing act is something that we will discuss next time!
You’ll learn everything you need to know in the blink of an eye. Well, it’s going to take a bit longer than that but the mechanics of a blinking eye will help. Here’s some mildly interesting information. The average human eye blinks every 4 seconds or over 20,000 times per day. We do it mostly unconsciously but I dare you to stop thinking about blinking now!
When you snap a picture, it’s kind of like a reverse blink. Imagine that your eyelids are closed, you open them for a millisecond and close them. Whatever you saw is recorded as a memory. That’s what a camera is doing, and the amount of time the “eyelid” is open is referred to as shutter speed. There are different types of shutters (mechanical, global, rolling etc.) but we won’t take a deep dive into that just now.
The important thing to realize is that when you are out of automatic and into manual you have control over shutter speed. That means you need to make decisions about what speed you will select and that all comes down to this: what are the lighting conditions? What are you trying to achieve?
Slow shutter speeds capture more light but they will also tend to blur time. Think of a beautiful picture of a waterfall. If the water looks silky and wispy you know it was taken with a slow shutter speed. As a side note, you’re also going to need a tripod or something to set your camera on or the entire picture will be a blurred nightmare. On the other hand, a fast shutter speed is going to freeze time as it’s taking a tiny slice and recording that moment.
The amount of available light has an effect on your choices as well. Very little light means slow shutter speed. Lots of light means you can have a faster speed. If your thing is aerial mapping then a fast shutter speed is what you’re looking for. We’ve found that 1/1000 is the preferred speed for us. It’s kind of a sweet spot, much faster than that and you won’t be getting enough light hitting the sensor and the “memory” that’s recorded will be too faint.
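As a sanity check on that 1/1000 figure, you can estimate motion blur directly: it’s just ground speed multiplied by the time the shutter is open, compared against your pixel size. The 15 m/s flight speed and 1-inch-per-pixel resolution below are made-up but plausible numbers for a mapping flight:

```python
def blur_pixels(speed_mps, shutter_s, gsd_m):
    """Ground distance smeared during one exposure, expressed in pixels."""
    return speed_mps * shutter_s / gsd_m

GSD = 0.0254    # 1 inch per pixel, expressed in meters (assumed resolution)
SPEED = 15.0    # hypothetical ground speed of the drone, m/s

crisp = blur_pixels(SPEED, 1 / 1000, GSD)    # stays under one pixel
smeared = blur_pixels(SPEED, 1 / 250, GSD)   # smears across multiple pixels
```

Keeping the blur under one pixel is what makes the photos look frozen; slow the shutter a couple of stops and the smear spans several pixels, which is exactly the motion blur you’re trying to avoid.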
So far we’ve discussed two aspects of exposure: 1. Aperture 2. Shutter speed. Can you see how the two work together? If you stop down your aperture you need to slow down your shutter speed. If you open up your aperture you need to speed up the shutter.
It’s a balancing act of light management. The third piece of the puzzle (or third side of the triangle) is what we’ll talk about next time.
It’s been said that a drone is little more than a flying camera. I’m inclined to agree with that. When you strip away all the tech (obstacle avoidance, GPS, Intelligent batteries, you name it) what you’re left with is a camera that you control remotely.
So it stands to reason that if you’re going to do a good job with that drone, you need to know something about photography on the ground. While cameras themselves have changed quite a bit, at the end of the day they are all doing the same thing: collecting light on a sensor. That being said, some of them do that much better than others and there’s a whole host of features that could be considered but we’re going to keep this simple. We’ll do that through 3 posts discussing 3 things: 1. Aperture, 2. Shutter speed, 3. ISO.
It’s true that most cameras will adjust this whole “exposure triangle” automatically, but ask any photographer and they will tell you that to get consistent results, automatic mode is not where you want to live. You need to get comfortable with manual settings.
Aperture is defined as an opening or gap. The aperture on a camera refers to the gap that lets in light. You could think of it like your eyelids. When it’s really bright out what do you do? You squint to cut down the glare, or in photography terms you stop down the amount of light coming in.
Now how does aperture affect things like mapping? Here’s an experiment. Take a book or some kind of paper with writing on it. Move it towards your face until your eyes can’t focus on the print anymore. Now create a pinhole with your fingers to look through. The tighter you stop that hole down, the more easily you can focus on the text. You lose some light, but you gain focus. So one thing aperture can do for you in mapping is keep all of your pictures in clearer focus, which in turn means more accurate maps.
By the way, f-stop is the name for how open or closed the aperture is. f/1.8 would be very open, f/8 would be much smaller, and f/22 would be a very small opening (something you might use in very bright light).
F-stop is one of the biggest contributors to depth of field. What is depth of field? It refers to how much of the photo is in focus. So if you were taking a portrait of someone, using a lens that lets you shoot at f/1.8, and you focus on their face, what will happen to the background? It’s going to look blurry; you would be experiencing a “shallow depth of field”. That blur is called bokeh, and it really gives a 3-dimensional look to your photos.
Depth of field is a very important artistic consideration and can help you tell a professional photo from an amateur one. If your priority is map making, though, a shallow depth of field is not what you’re going for; instead, you want as much of the photo in focus as possible.
So much is dependent on available light, which makes sense when you consider that the word “photography” means “writing with light”. Now that we’ve talked about aperture enough for one day, I’ll let you start thinking about the topic for the next post: Shutter speed.
Most days you probably don’t think about infrared radiation that much, and that’s ok! It’s not harmful. Infrared radiation is best described as light that you can’t see but you can feel.
You feel it when you walk outside on a sunny day. You feel it when you hold your hand over a pan that’s heating up on the stove. You feel it when you sit next to a campfire. However, you don’t see it.
Every day you make decisions that are influenced by infrared radiation (such as what clothes to wear) and you never even see it. So imagine what kind of decisions could be made if you actually could see infrared radiation.
To us that’s one of the coolest things about our job, getting to see infrared and helping our customers make informed decisions.
To say life has recently taken a turn for the strange wouldn’t even be a rough approximation of reality. When part of your preflight planning involves Clorox wipes, rubber gloves and a contingency plan in the event of a sneeze you know the world has slipped several degrees out of normal orbit.
In this business, we deal with a lot of high tech gadgets, cool software, and cutting edge innovations. And yet the most complicated thing on any job site is the human body. It’s humbling to realize that no matter what advances technology might make, just breathing the wrong air can change your life and the lives of those around you.
We took this picture at a job on March 16, 2020, before any shelter-in-place orders were in effect (it was a different time back then), and I’m happy to report that the three of us are in good health. As a company we have made a variety of changes during this time, including limiting the number of employees at a job site and working from home. Thankfully, most of the work really is in the post processing, which is all done remotely.
The truth is, all of us make changes for the greater good. That includes wearing hard hats, safety glasses, high viz, gloves, boots and so on. We do it even if it’s not our personal preference. So at EPIC we’re continuing to work, but as always we’re going to do it safely. Hazards take on different shapes and sizes. We’re committed to mitigating those hazards and we applaud everyone who takes this crisis seriously by keeping their friends close, but not too close!
Less is more… sometimes. Creating orthomosaic maps requires balance. Balancing the size of the project, the amount of front lap, side lap, altitude, and time: both the time of the flight and the time for uploading and processing. So when planning a project it’s important to ask some questions.
What is the purpose of this map? Is it going to be used for measurements and planning, or is it just going to be hung on the wall as an aerial picture? How much area needs to be mapped? Obviously the larger the map, the more time the project will take, and each picture taken is just going to add to the total file size that needs to be uploaded/processed.
On the other hand, if you have insufficient coverage you’ll have wasted your time and the customer’s time.
Your map can only be as accurate as your ground sampling distance (GSD). For us, we try to maintain a 1” per pixel resolution, so depending on your camera the altitude you fly at may be higher or lower, but basically it’s going to be a set variable. In Ground Station Pro you can see the approximate GSD when you’re setting up a flight.
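The relationship between camera, altitude, and GSD is a single proportion: one pixel covers (sensor width × altitude) / (focal length × image width) of ground. Here’s a quick sketch using X4s-style numbers (13.2 mm sensor width, 8.8 mm focal length, 5472 pixels across; these are approximations, so check your own camera’s spec sheet):

```python
def gsd_inches(sensor_w_mm, focal_mm, image_w_px, altitude_ft):
    """Ground sampling distance in inches per pixel."""
    altitude_mm = altitude_ft * 304.8                        # feet -> mm
    gsd_mm = sensor_w_mm * altitude_mm / (focal_mm * image_w_px)
    return gsd_mm / 25.4                                     # mm -> inches

# Approximate X4s-style camera: 13.2 mm wide sensor, 8.8 mm lens, 5472 px.
gsd_at_304ft = gsd_inches(13.2, 8.8, 5472, 304)
```

With those numbers, flying at roughly 300 feet lands you right around the 1” per pixel target; a wider lens or smaller sensor would push that altitude down.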
Overlap is essential but again there’s a sweet spot between too much and too little. Too much and you start creating unnecessary amounts of data, too little and you risk losing data and having holes in your map.
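Overlap translates directly into how many photos you take, since the spacing between exposures is just footprint × (1 − overlap). A quick sketch (the 456 ft along-track footprint is a hypothetical number for illustration):

```python
def shot_spacing(footprint_ft, overlap_pct):
    """Distance between exposures for a given forward overlap."""
    return footprint_ft * (1 - overlap_pct / 100)

FOOTPRINT = 456.0   # hypothetical along-track photo footprint, in feet

at_75 = shot_spacing(FOOTPRINT, 75)   # 114 ft between shots
at_90 = shot_spacing(FOOTPRINT, 90)   # about 45.6 ft between shots
```

Pushing overlap from 75% to 90% means 2.5 times as many photos along every flight line, which is exactly the “unnecessary amounts of data” trap mentioned above.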
It’s been said that photogrammetry is as much an art as it is science. So there’s no substitute for experience, making some mistakes and using good judgement. As consultants, we’re here to help with all your mapping needs and questions so feel free to contact us, but either way, remember that balancing the amount of area and data that you collect is a big piece of the photogrammetry puzzle.
We get asked that question a lot and I always find it a bit puzzling. The whole point of using drones for commercial purposes (as opposed to military purposes) isn’t to see how far you can fly. It’s really about the vertical perspective. For that, you don’t need to go very high or very far.
Let’s say you have a mapping job of 500 acres. For a takeoff location I would look for the highest point and hopefully one that is fairly close to the center of the project. Your drone will never be very far away so you maintain a stronger signal and clear VLOS. You also will get the most working flight time out of each set of batteries since less time is spent flying to the far end of the property and back.
Same thing goes for inspections. Why make inspections from as far away as possible when simply being out of harm’s way allows you to keep a closer eye on the UAV and have the best video quality?
Here’s the main point: Looking for ways to keep your drone closer to home will be safer, more efficient, and allow for better video quality. To us, that’s way more important than setting distance records!
Shooting video from a drone is awesome. While EPIC is mainly an inspection and mapping company, we’re not afraid to get out there and make some marketing or training videos with a variety of equipment. To be honest, I find cinematography both fascinating and rewarding while at the same time being demoralizing and maddening.
There are few things worse than thinking “I got the shot,” only to find upon review that the footage had some horribly unnatural movement at a critical moment. Sometimes this has happened to me because my finger wasn’t quite positioned right on the stick, and when I tried to adjust it… I ruined the shot.
So, what is one simple step in the right direction? Of course, learning to be light on the controls is a good idea. You can also select the “tripod” intelligent flight mode, which basically reduces the sensitivity of the controls. But one thing that can really smooth out your videos and boost your flying skills is learning to fly in ATTI mode.
First we have to say: Don’t do anything you’re not comfortable with, don’t do this on a windy day, don’t start out near any obstructions, and flip the switch at your own risk.
Ok, so what is ATTI mode? ATTI mode disables the direct influence of satellite positioning. Think of it like this: Position mode is like riding a stationary bike; when you stop pedaling, the wheel stops moving. ATTI mode is like riding your bike outside; when you stop pedaling, you coast. You can still hit the brakes, but depending on the wind and ground slope you may find it easier or harder to stop. Riding inside is definitely safer; riding outside feels more natural.
Generally, if we’re making a video and someone says “that’s a nice drone shot”, as the camera operator I feel, to some extent, like I’ve missed the mark. I’d rather you not even be thinking about the position of the camera. It’s like trying to blend into a crowd: nothing makes you stand out quicker than awkward, suspicious movements. And that’s where ATTI comes in. The more natural the movement, the less distracting it is, and the more opportunity your audience has to focus on the content, not how you got that particular shot.
There are, of course, many other things that can make a difference, not the least of which is good old practice. In future posts we’ll address those, but the fact that you read this far means you must care about your drone video skills, and for that we say: Rock on! Stay safe, and keep pressing that record button!
Today I wanted to talk about some of the limitations of infrared thermography. We’ve been called out several times to locate water leaks for customers; sometimes it’s been a success, other times we find nothing conclusive. What makes the difference?
Imagine that you have very poor eyesight (if you do have very poor eyesight, I’m sorry, but you may understand this even better). A pair of corrective lenses can make all the difference when reading this page. But suppose we change the color of the letters in this article to white. Will you still be able to read? Your glasses are great, but they can’t help you see what isn’t there.
Infrared technology is like those glasses. It’s wonderful, it can open up a whole new world of possibilities, but it can’t help you see what isn’t there. What allows you to read this text is the contrast between the dark letters and the light page. If the letters and the page are the same color, there’s nothing to see, and if the temperature of what you’re looking for and what’s surrounding it is the same, you’re wasting your time. Even if the information is there it will be camouflaged.
At times the temperature of the water line we’re looking for and the surrounding ground is simply too close to provide any usable data. That is a limitation of infrared. It’s nobody’s fault; you simply can’t read what isn’t there.
Another issue can be when a line is too deep. In our most recent case the water line was 4’ below the surface and much of it had just been paved over. That was two infrared strikes against us! In that case the infrared radiation simply can’t get through, in the same way that sound waves can’t make it through a soundproof wall.
In a different scenario, the line break was thought to be under a concrete slab which also had a thermal blanket between the ground and the concrete. Again, nothing to see here. The information may be there, but it’s completely blocked. Like a person screaming at you from behind soundproof glass, you can tell something’s up but you can’t make out any words.
So there you have two basic reasons why an infrared inspection may not turn up anything. Infrared, like everything else, has its limitations; however, it can also see things that would be impossible to discern without it, and it’s a powerful tool for seeing a whole different dimension of light. Like so many things in life, knowing your limitations is as important as knowing your strengths.
What was special about this job? It was scooted right up against an Air Force base, making it a definite “no go” for drone pilots. But with some hoop jumping and proper forms nearly anything is possible, and this was no exception.
Everything came together nicely. The FAA granted our authorization. The day before the flight ATC was contacted and given the details of our operation. We communicated with them again right before takeoff and right after the job was done so that everything was done safely and professionally.
Lessons learned: Sometimes being in the right place at the right time is all about hard work, being committed to safety, and following proper procedures.
Facility Management (FM) is rapidly evolving. The Internet of Things (IoT) is becoming the new reality and BIM (Building Information Modeling) is creating exciting possibilities for augmenting that reality. How can laser scanning help FM teams? Increasingly FMs are on the go and a major advantage of 3D technology is the ability to take your facility with you, improving communication and decision making. Bottom line: utilizing laser scanning makes INFORMATION more valuable than your LOCATION.
Which is better? The term “better” really implies that one of the two is inferior, but in truth, that’s not necessarily the case. Comparing the two technologies is like comparing dogs and cats, they’re two different species with different strengths and weaknesses, you can’t compare them, you can’t even know for sure which one would win in a fight!
So what are the differences between these two things? Photogrammetry is a passive form of collecting data, meaning you’re not sending a signal; you’re just picking up available light from the surface. Your accuracy is dependent on being at the appropriate altitude, with the appropriate camera and settings, and collecting the data at the right time of day.
Further, you can utilize ground control points (either pre-existing ones or ones that you provide, such as Aeropoints) to increase your global accuracy and provide ground truth information: checks and balances. The information you receive is in the form of pixels, and if you set up your altitude correctly, one pixel should equal about one inch of ground, which is also known as the image’s Ground Sampling Distance.
Laser scanning is actively collecting information. As the mirror spins, the laser is being sent and bounced back at a mind-numbing speed, and those millions of points are being translated into 3D information. Just like a printed picture is made up of millions of little dots, the picture that the laser produces is made up of millions of tiny points of light. Accuracy can be as close as ±1 mm.
So what’s the bottom line? This is just a very brief, simplified look at these two options. They both are good. Laser scanning will tend to have a higher level of accuracy, but photogrammetry will have a nicer “look” to it. Cost and speed are also considerations. Really though, they don’t need to be opposed to each other; in fact, laser and photo can be combined depending on the situation, giving you the best of both worlds. At EPIC, we’re proud to have both technologies at our disposal!
You shouldn’t judge a book by its cover, but when it comes to getting accurate IR temperature readings it’s all about what’s on the outside. So for instance, if you have a stainless steel pipe that is running at 100 °F and a flat painted pipe also running at 100 °F, which one will appear to have a higher temperature? If you said the painted pipe, you would be correct, but you also might wonder why you were correct. There are several things that affect accuracy, but we’ll just look at one: surface structure.
Let’s get one thing out of the way right off the bat: emissivity is the single most important factor for reading apparent temperature. So what is emissivity? Simply put, it’s how effectively an object emits thermal energy. Some materials have a high emissivity: rough surfaces, painted surfaces, non-metals. These will all tend to emit a more accurate picture of their true temperature. You could say they’re an open book; what you see is what you get.
Now in contrast, glossy materials (like the stainless steel pipe) are not very good emitters because they’re too preoccupied with being good reflectors (no one can be good at everything), and therefore they may give you a better idea about the temperature of things around them than their own temperature.
What’s the main point? If you’re going to take a reading on a highly reflective surface you need to either compensate for that in your camera i.e. adjust the emissivity settings, or place something that is a good emitter on the surface and then take the reading. Electrical tape has an emissivity of about .95 which makes it a handy tool to keep around. Just slap it on the surface and you’re good to go. Another thing you can do is find an area with lots of corrosion which is also a good emitter and use it for your thermographic exploits. Just remember, with thermography, it’s what’s on the outside that really counts.
That’s all for today, now get out there and soak up some infrared radiation!
There’s no way around it, looking at the world through an infrared camera is cool, really cool. Even just looking at hot water coming out of a faucet feels like you’re watching molten lava pour into the sink and down the drain. Of course, what you’re seeing is just a representation of the infrared radiation and that representation can be dressed up in different color schemes depending on the situation and your preference.
While not all cameras have exactly the same options, here’s a brief rundown of three FLIR color palettes.
This is the original palette used with IR (Infrared) imaging systems and it’s still a favorite for many thermographers. The focus tends to be a little clearer and it’s great for seeing fine spatial detail. However, as you can see in the picture, it’s not as good for easily spotting small temperature differences.
Ironbow is a good, popular, all-around palette. It has a nice balance between spatial detail and thermal detail. Small temperature differences will show up more readily, and it has a nice intuitive appeal for non-thermographers.
As its name suggests, the rainbow palette introduces more colors into your picture. It’s great for showing maximum thermal contrast and highlighting potential problems for a customer.
So in the end, color palettes are a very useful tool. Use them in real time during inspections, or in post processing and when generating reports. Use a variety and choose the one that’s right for:
1. The type of inspection
2. Personal preference
3. The needs of your customer
A thermal anomaly is a fancy way of saying that there is a noticeable temperature difference. You may not know why, or even if it’s a problem, but you know there’s a difference. The problem is, sometimes things that seem to be anomalies are really just thermal reflections. You can think of it this way, if you were trying to shoot a video of a person looking in a mirror where would you stand? If you stand directly behind them your own reflection is going to end up in the picture, so you would need to stand to one side. Problem solved.
Something similar happens when looking at infrared radiation. Stand directly in front of what you’re scanning and your own thermal reflection will show up. Another big thing that can throw you off is the sun. In this video you can see two things that might seem to be anomalies but as the drone moves you’ll notice one “hot spot” moves with it. That’s the sun. The area that stays consistent as we move is the anomaly.
So it’s really a very basic concept. If you think you’ve found something out of the ordinary, just move a little, take the reading again, and see if you’re getting the same results. It’s also good to keep in mind that certain surfaces are going to be more reflective than others, meaning you’re not getting the full thermal story when you look at them; instead you’re seeing more reflected radiation than the object is actually emitting. Long story short: don’t be afraid to move your feet (or your drone) to get a more accurate reading, and take into account the surface properties of whatever you’re looking at.