## Android: Gyroscope Fusion


Discusses fusing a gyroscope sensor with a magnetic and acceleration sensor via a complementary filter to produce rotation estimations.

Author: Kaleb Kircher

Version: 2.0 (04.11.14.2.0)

Revision history:

- v1.0 | 12.6.13.1.0 | Authored
- v2.0 | 04.11.14.2.0 | Refactored

Copyright: 2012-2015 Kircher Engineering, LLC

## Associated Code Projects:

### Gyroscope Explorer:

Gyroscope Explorer is a code example and working application that addresses the unreliable drift of Sensor.TYPE_GYROSCOPE with a sensor fusion. It contains Android classes that demonstrate fusing the gyroscope, acceleration and magnetic sensors via a complementary filter to produce an estimation of the device's rotation angles. It also demonstrates how to use Sensor.TYPE_GYROSCOPE on its own for comparison: integrating the sensor outputs over time to describe the device's change in angles, initializing the rotation matrix, concatenating each new rotation matrix with the initial rotation matrix, and producing an orientation from the concatenated rotation matrix. Gyroscope Explorer displays the Earth frame orientation (azimuth, pitch and roll) in a clean graphical view, which makes it easy to compare the performance of the fused and un-fused rotation estimations.

Gyroscope Explorer is available for free in the Google Play Store!

You can fork the Gyroscope Explorer source code from GitHub!

## How Gyroscopes Work

Gyroscopes found on Android devices are almost always vibrational (MEMS) sensors that measure the rotation of the device with a pair of vibrating arms that take advantage of the Coriolis effect. When the device rotates, the rotation deflects the direction of the vibrating arms, and by measuring that deflection an estimation of the rotation can be produced. The gyroscope is one of three sensors that are always hardware-based on Android devices (the other two are the magnetic and acceleration sensors). In conjunction with the acceleration sensor, the gyroscope can be used to create virtual sensors like the gravity, linear acceleration or rotation sensors. These sensors are all useful for detecting the movement of the device, which can be a result of user input (moving the device to control a character in a game) or of the physical environment (like the movement of a car). They can also be used indirectly to determine the position of a device, for example to tilt-compensate the magnetic sensor for a compass.

## Gyroscope Drift

Like all sensors, a gyroscope is not perfect and has small errors in each measurement. Since the measurements from a gyroscope are integrated over time, these small errors add up and result in what is known as drift. Over time, the results of the integration become unreliable and some form of compensation is required. This requires another sensor to provide a second measurement of the device's orientation, which can then be used to nudge the gyroscope's integration back towards the actual rotation of the device. This second sensor is usually an acceleration or magnetic sensor, or sometimes both. A weighted average, a Kalman filter or a complementary filter are common ways of fusing other sensors with the gyroscope, each with its own advantages and disadvantages. When you really get down into the implementations, you also run into real limitations with the "support" sensors. For instance, an acceleration sensor cannot distinguish between the tilt of the device and linear acceleration, which makes for a vicious circular reference when trying to implement a linear acceleration sensor. In fact, the Android Sensor.TYPE_LINEAR_ACCELERATION is terrible at measuring linear acceleration under the influence of a physical environment, such as the acceleration of a car, because of this circular reference. The magnetic sensor is another option, but it is limited by the effects of hard and soft iron offsets and it cannot measure pitch, only roll and yaw, so it isn't always reliable either. It can take a lot of effort, fine tuning and possibly multiple sensor fusions and calibrations to get reliable estimations.

Despite being compensated for drift, Sensor.TYPE_GYROSCOPE drifts almost as much as Sensor.TYPE_GYROSCOPE_UNCALIBRATED, which is not drift-compensated. This makes estimating rotation angles over time almost impossible: controlling a game character or attempting to determine linear acceleration stops working after short periods of time. You can see this in the Gyroscope Explorer project in the Play Store, on GitHub and on our blog.

*The result of some quick rotations of a Nexus 4 followed by placing the device on a flat surface: the calibrated sensor performed only slightly better than the uncalibrated version. External vibrations, as you might expect, cause erroneous estimations as well.*

## Sensor Fusions

Sensor fusions take measurements from multiple sensors and combine them to create a better estimation than either sensor could produce by itself. The most common fusion for determining linear acceleration pairs the accelerometer with the gyroscope, which measures the rotation of the device. If you know the rotation of the device and the acceleration due to gravity, you can determine the tilt of the device and subtract the gravity component from the measured acceleration to get linear acceleration. A similar approach can be taken with the acceleration and magnetic sensors.
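As a rough sketch of that idea (the class and variable names here are illustrative, not from Gyroscope Explorer), a low-pass filter can isolate an estimate of gravity, and subtracting it from the raw accelerometer sample leaves an approximation of linear acceleration:

```java
// Hypothetical sketch: estimate linear acceleration by subtracting a
// low-pass-filtered gravity estimate from the raw accelerometer sample.
class LinearAccelerationSketch
{
    // Smoothing factor for the gravity estimate (assumed value).
    static final float ALPHA = 0.9f;

    private final float[] gravity = new float[3];

    // Returns the linear acceleration (m/s^2) for one accelerometer sample.
    float[] update(float[] accel)
    {
        float[] linear = new float[3];

        for (int i = 0; i < 3; i++)
        {
            // The low-pass filter isolates the slowly changing gravity
            // component of the signal.
            gravity[i] = ALPHA * gravity[i] + (1 - ALPHA) * accel[i];

            // What remains is the high-frequency linear acceleration.
            linear[i] = accel[i] - gravity[i];
        }

        return linear;
    }
}
```

With a stationary device the gravity estimate converges to the measured acceleration, so the linear acceleration output settles towards zero.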

## Magnetic Field

The Earth's magnetic field extends from the Earth's interior out into space, where it meets a stream of charged particles emanating from the Sun known as the solar wind. The field is generated by molten iron alloys in the Earth's outer core and is not fixed, although it moves very slowly. Its magnitude ranges from about 25 to 65 microteslas depending on where on Earth the measurement is taken. Most devices have a magnetometer, which measures the Earth's magnetic field; by measuring it, we can get a reasonable estimation of the yaw/azimuth of the device.

## Gravitational Field

The gravitational field is like the magnetic field in that it can be measured: there is an acceleration due to gravity of about 9.8 m/s^{2} pointing down at every point on Earth. Another way of saying this is that the magnitude of the Earth's gravitational field (which we can estimate from the acceleration sensor by taking the square root of the sum of the squares of the three axes) is about 9.8 m/s^{2}, pointing down, at all points on Earth.

Gravitational field: g = F/m, where F is the force of gravity acting on a mass m.

We can draw a field-line pattern to reflect that, near the Earth's surface, the field is uniform. The strength of a field is reflected by the density of field lines - a uniform field has equally-spaced field lines.
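As a quick illustration (a hypothetical helper, not part of the project code), the magnitude of the measured acceleration is the Euclidean norm of the three axes, which for a device at rest should come out near 9.8 m/s^{2}:

```java
// The magnitude of the measured acceleration is the Euclidean norm of the
// three sensor axes: sqrt(x^2 + y^2 + z^2). For a device at rest this
// should be close to Earth's gravity, roughly 9.81 m/s^2.
class GravityMagnitude
{
    static float magnitude(float x, float y, float z)
    {
        return (float) Math.sqrt(x * x + y * y + z * z);
    }
}
```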

## Coordinate Systems:

There are a number of coordinate systems to be aware of when developing with Android devices.

### World Coordinate System:

The world coordinate system in Android is the ENU (east, north, up) coordinate system. This is *different* from the NED (north, east, down) coordinate system that is commonly used in aviation.

### Local Coordinate System:

The local coordinate system describes the coordinate system of the device. For most sensors, the coordinate system is defined relative to the device's screen when the device is held in its default orientation (see figure 1). When a device is held in its default orientation, the X axis is horizontal and points to the right, the Y axis is vertical and points up, and the Z axis points toward the outside of the screen face. In this system, coordinates behind the screen have negative Z values. The most important point to understand about this coordinate system is that the axes are not swapped when the device's screen orientation changes—that is, the sensor's coordinate system never changes as the device moves.

### Other Coordinate Systems:

The method SensorManager.getOrientation(), which is commonly used to get the orientation vector of the device from a rotation matrix, uses a third reference coordinate system that is not the world coordinate system. It uses a WND (west, north, down) coordinate system, which is different from both the ENU and NED coordinate systems that are more common. Also worth noting is that the order of the axes returned by the method is different from the order returned by the sensors.

When SensorManager.getOrientation() returns, the values array is filled with the result:

- values[0]: azimuth, rotation around the Z axis.
- values[1]: pitch, rotation around the X axis.
- values[2]: roll, rotation around the Y axis.
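SensorManager.getOrientation() returns these angles in radians, so a small conversion helper (illustrative, not from the project code) is handy when displaying them:

```java
// getOrientation() fills its output array with angles in radians
// (azimuth, pitch, roll); convert them to degrees for display.
class OrientationDegrees
{
    static float[] toDegrees(float[] radians)
    {
        float[] degrees = new float[radians.length];

        for (int i = 0; i < radians.length; i++)
        {
            degrees[i] = (float) Math.toDegrees(radians[i]);
        }

        return degrees;
    }
}
```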

## Low Pass Filters:

Note: You can read all about Low Pass Filters and how they work in this blog post.

A low-pass filter is a type of electronic filter that passes low-frequency signals through unchanged while reducing the amplitude of signals above what is known as the cutoff frequency. Low-pass filters are used for all sorts of things, including audio, images, signal processing and digital filters.

The concept of a low-pass filter comes from electrical engineering: a resistor in series with the load and a capacitor in parallel with the load. The capacitor essentially acts as a spring, absorbing high frequencies while passing low frequencies through to the load. At low frequencies the capacitor has time to charge, so its voltage follows the input voltage and the signal appears at the output. At high frequencies the capacitor does not have time to charge, and the signal is shunted to ground instead of reaching the output.

The resistance and the capacitance define the time constant (T = RC), which in turn determines the cutoff frequency (the frequency above which the filter attenuates the signal).
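To make that relationship concrete, here is a small sketch (hypothetical helper name) computing the cutoff frequency from the time constant, fc = 1 / (2 * pi * T):

```java
// For a first-order RC low-pass filter, the time constant T = RC
// determines the cutoff frequency: fc = 1 / (2 * pi * T).
class CutoffFrequency
{
    static double cutoffHz(double timeConstantSeconds)
    {
        return 1.0 / (2.0 * Math.PI * timeConstantSeconds);
    }
}
```

With the time constant of 0.297 s used in the filter code below, this works out to a cutoff of roughly 0.54 Hz.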

### The Low-Pass Filter Code:

*Note how the sample period is calculated using an averaging method. This is done because the sample period between two consecutive updates (timestamp - timestampOld) is very inconsistent; the averaging method gives a much steadier estimation of the sample period.*

```java
// Time constant in seconds
static final float timeConstant = 0.297f;

private float alpha = 0.0f;
private float dt = 0.0f;
private float timestamp = (float) System.nanoTime();
private final float timestampOld = (float) System.nanoTime();
private float[] output = new float[3];
private int count = 0;

public void lowPass(float[] input)
{
    timestamp = (float) System.nanoTime();

    // Find the average sample period (between updates), converting from
    // nanoseconds to seconds.
    dt = 1 / (count / ((timestamp - timestampOld) / 1000000000.0f));

    count++;

    // Calculate alpha
    alpha = dt / (timeConstant + dt);

    // Update the filter
    // y[i] = y[i] + alpha * (x[i] - y[i])
    output[0] = output[0] + alpha * (input[0] - output[0]);
    output[1] = output[1] + alpha * (input[1] - output[1]);
    output[2] = output[2] + alpha * (input[2] - output[2]);
}
```

## Complementary Filters:

A complementary filter can be thought of as a combination of a high-pass filter and a low-pass filter. The first part of the equation, alpha * gravity, acts as a high-pass filter, and the second part, (1 - alpha) * event.values, acts as a low-pass filter. The complementary part of the filter means that the two parts should add to 1, which is where the (1 - alpha) comes from. It occurs to me that this is probably not strictly the case unless you normalize the inputs and outputs to 1 with a partition function before applying the filter, but it is still an effective filter.

We can use this concept of a complementary filter to fuse two different sensor measurements together. In the case of Fused Gyroscope Explorer, we will take the orientation from the gyroscope and fuse it with the orientation from the acceleration and magnetic sensor.
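Stripped down to a single angle, the fusion step can be sketched like this (the class name and coefficient are illustrative assumptions, not the project's actual values):

```java
// Minimal one-dimensional complementary filter sketch. The gyroscope angle
// tracks fast changes but drifts; the acceleration/magnetic angle is
// drift-free but noisy. The two weights sum to 1.
class ComplementaryFilter1D
{
    // Assumed filter coefficient; weights the gyroscope heavily.
    static final float COEFF = 0.98f;

    static float fuse(float gyroAngle, float accMagAngle)
    {
        return COEFF * gyroAngle + (1 - COEFF) * accMagAngle;
    }
}
```

A high coefficient means the output follows the responsive gyroscope in the short term while the acceleration/magnetic orientation slowly corrects the drift.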

## Getting the Orientations from the Sensors:

We will need to get the initial rotation matrix from the acceleration and magnetic sensors. Most folks will want to start with a standard basis oriented to the world frame (East, North, Up) based on the current orientation of the device. This requires determining the current orientation of the device in the world coordinate system and can be done with the acceleration and magnetic sensors with a call to SensorManager.getRotationMatrix(). The matrix that is returned will then be multiplied by the delta rotation matrix that is produced from the gyroscope measurements to produce a new rotation matrix representing the current orientation.

If you do not take this step, you can instead create the initial orientation from the identity matrix, based on the rotation of the device when the algorithm starts. This creates a world frame that is oriented to the initial rotation of the device, and all further device-frame rotations will be relative to that initial world frame. The identity matrix is then multiplied by the delta rotation matrix produced from the gyroscope measurements and the integration method to produce a new rotation matrix representing the current orientation.

### The Orientation Code:

```java
private void calculateInitialOrientation()
{
    hasInitialOrientation = SensorManager.getRotationMatrix(
            initialRotationMatrix, null, acceleration, magnetic);
}
```

## Integration Over Time:

The gist of the integration comes from the Android Developer documentation. First, the axis-angle representation is normalized (a requirement of quaternions) by taking the magnitude of the vector and dividing each element by that magnitude. The angular speed is then integrated over the sample period and the axis-angle representation is converted into a quaternion. The quaternion is then converted into a rotation matrix (via SensorManager.getRotationMatrixFromVector()), which can be applied to another rotation matrix representing the current orientation of the device. Finally, the matrix representing the current orientation is converted into a vector representing the orientation of the device (via SensorManager.getOrientation()).

Some things that aren't explained very well in the documentation...

Why divide each element in the axis-angle vector by the magnitude of the vector? SensorManager.getRotationMatrixFromVector() requires a unit quaternion and, by definition, the three elements of the rotation vector are equal to the last three components of a unit quaternion <cos(θ/2), x*sin(θ/2), y*sin(θ/2), z*sin(θ/2)>. A vector u is a unit quaternion if its norm is one, |u| = 1. We normalize the axis-angle vector so we can convert it into a unit quaternion.

Why is the sensor's delta time (the sensor period, dt, time between updates, etc.) divided by 2? This comes from the θ/2 in each of the quaternion's elements: we want half angles for our quaternions. Instead of dividing theta by 2 in each of the four elements, we divide it once when we determine the sensor's delta time.
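To make the half-angle point concrete, here is a sketch (a hypothetical helper, not from the documentation) of building the delta-rotation quaternion from a single angular-rate sample:

```java
// Build a unit quaternion from one angular-rate sample: the axis is the
// normalized rotation vector and the angle is omega * dT, halved because
// quaternion components use half angles.
class DeltaQuaternion
{
    // omega: angular speed in rad/s, axis: normalized rotation axis,
    // dT: sample period in seconds.
    static float[] fromRate(float omega, float[] axis, float dT)
    {
        // Half angle: quaternion components use theta / 2.
        float thetaOverTwo = omega * dT / 2.0f;
        float sin = (float) Math.sin(thetaOverTwo);
        float cos = (float) Math.cos(thetaOverTwo);

        // <x*sin(theta/2), y*sin(theta/2), z*sin(theta/2), cos(theta/2)>
        return new float[] { sin * axis[0], sin * axis[1],
                sin * axis[2], cos };
    }
}
```

Because sin² + cos² = 1 and the axis has unit length, the resulting quaternion has norm 1, which is exactly what getRotationMatrixFromVector() expects.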

What is a good value for EPSILON? A value just large enough that normalizing the vector u is numerically safe, so very small, just larger than 0.

## The Integration Code:

```java
private static final float NS2S = 1.0f / 1000000000.0f;
private static final float EPSILON = 0.000000001f;

private final float[] deltaRotationVector = new float[4];
private float timestamp;

public void onSensorChanged(SensorEvent event)
{
    // This timestep's delta rotation to be multiplied by the current
    // rotation after computing it from the gyro sample data.
    if (timestamp != 0)
    {
        final float dT = (event.timestamp - timestamp) * NS2S;

        // Axis of the rotation sample, not normalized yet.
        float axisX = event.values[0];
        float axisY = event.values[1];
        float axisZ = event.values[2];

        // Calculate the angular speed of the sample
        float omegaMagnitude = (float) Math.sqrt(axisX * axisX
                + axisY * axisY + axisZ * axisZ);

        // Normalize the rotation vector if it's big enough to get the axis
        if (omegaMagnitude > EPSILON)
        {
            axisX /= omegaMagnitude;
            axisY /= omegaMagnitude;
            axisZ /= omegaMagnitude;
        }

        // Integrate around this axis with the angular speed by the timestep
        // in order to get a delta rotation from this sample over the
        // timestep. We will convert this axis-angle representation of the
        // delta rotation into a quaternion before turning it into the
        // rotation matrix.
        float thetaOverTwo = omegaMagnitude * dT / 2.0f;
        float sinThetaOverTwo = (float) Math.sin(thetaOverTwo);
        float cosThetaOverTwo = (float) Math.cos(thetaOverTwo);

        deltaRotationVector[0] = sinThetaOverTwo * axisX;
        deltaRotationVector[1] = sinThetaOverTwo * axisY;
        deltaRotationVector[2] = sinThetaOverTwo * axisZ;
        deltaRotationVector[3] = cosThetaOverTwo;
    }

    timestamp = event.timestamp;

    float[] deltaRotationMatrix = new float[9];
    SensorManager.getRotationMatrixFromVector(deltaRotationMatrix,
            deltaRotationVector);

    // User code should concatenate the delta rotation we computed with the
    // current rotation in order to get the updated rotation.
    // rotationCurrent = rotationCurrent * deltaRotationMatrix;
}
```

## The Rotation Concatenation Code:

We can apply a rotation matrix to another rotation matrix by multiplying the two rotation matrices together. This is how the orientations of the axis-angle vectors produced by the gyroscope are integrated. A rotation matrix representing each axis-angle vector is produced by the gyroscope and then applied to the rotation matrix representing the last known orientation of the device. The easiest way to do this is to create a function to multiply two 3x3 matrices together.

```java
private float[] matrixMultiplication(float[] a, float[] b)
{
    float[] result = new float[9];

    result[0] = a[0] * b[0] + a[1] * b[3] + a[2] * b[6];
    result[1] = a[0] * b[1] + a[1] * b[4] + a[2] * b[7];
    result[2] = a[0] * b[2] + a[1] * b[5] + a[2] * b[8];

    result[3] = a[3] * b[0] + a[4] * b[3] + a[5] * b[6];
    result[4] = a[3] * b[1] + a[4] * b[4] + a[5] * b[7];
    result[5] = a[3] * b[2] + a[4] * b[5] + a[5] * b[8];

    result[6] = a[6] * b[0] + a[7] * b[3] + a[8] * b[6];
    result[7] = a[6] * b[1] + a[7] * b[4] + a[8] * b[7];
    result[8] = a[6] * b[2] + a[7] * b[5] + a[8] * b[8];

    return result;
}
```
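As a quick sanity check of the row-major layout, the same multiplication can be written as a standalone helper with loops; multiplying by the 3x3 identity matrix must return the other matrix unchanged:

```java
// Standalone copy of the 3x3 row-major matrix multiplication, written with
// loops, for a sanity check outside the Android classes.
class MatrixCheck
{
    static float[] multiply(float[] a, float[] b)
    {
        float[] result = new float[9];

        // result[i][j] = sum over k of a[i][k] * b[k][j], row-major.
        for (int i = 0; i < 3; i++)
        {
            for (int j = 0; j < 3; j++)
            {
                for (int k = 0; k < 3; k++)
                {
                    result[3 * i + j] += a[3 * i + k] * b[3 * k + j];
                }
            }
        }

        return result;
    }
}
```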

## The Orientation Code:

Once we have our new rotation matrix, all we have to do is call SensorManager.getOrientation() to get the orientation of the device. Note that the reference coordinate system used is different from the world coordinate system defined for the rotation matrix, and the order of the axes returned by the method is different from the order returned by the sensors.

```java
SensorManager.getOrientation(currentRotationMatrixCalibrated,
        gyroscopeOrientationCalibrated);
```

## The Complementary Filter Fusion:

Now the two orientations need to be fused via a complementary filter. Note that this code comes from Paul Lawitzki; I have modified it to be slightly more efficient and readable, but credit for the algorithm itself goes to Paul.

```java
/**
 * Calculate the fused orientation.
 */
private void calculateFusedOrientation()
{
    float oneMinusCoeff = 1.0f - FILTER_COEFFICIENT;

    /*
     * Fix for the 179° <--> -179° transition problem: Check whether one of
     * the two orientation angles (gyro or accMag) is negative while the
     * other one is positive. If so, add 360° (2 * Math.PI) to the negative
     * value, perform the sensor fusion, and remove the 360° from the
     * result if it is greater than 180°. This stabilizes the output in
     * positive-to-negative-transition cases.
     */
    // 0 = azimuth, 1 = pitch, 2 = roll
    for (int i = 0; i < 3; i++)
    {
        if (gyroOrientation[i] < -0.5 * Math.PI && orientation[i] > 0.0)
        {
            fusedOrientation[i] = (float) (FILTER_COEFFICIENT
                    * (gyroOrientation[i] + 2.0 * Math.PI)
                    + oneMinusCoeff * orientation[i]);

            fusedOrientation[i] -= (fusedOrientation[i] > Math.PI)
                    ? 2.0 * Math.PI : 0;
        }
        else if (orientation[i] < -0.5 * Math.PI && gyroOrientation[i] > 0.0)
        {
            fusedOrientation[i] = (float) (FILTER_COEFFICIENT
                    * gyroOrientation[i]
                    + oneMinusCoeff * (orientation[i] + 2.0 * Math.PI));

            fusedOrientation[i] -= (fusedOrientation[i] > Math.PI)
                    ? 2.0 * Math.PI : 0;
        }
        else
        {
            fusedOrientation[i] = FILTER_COEFFICIENT * gyroOrientation[i]
                    + oneMinusCoeff * orientation[i];
        }
    }

    // Overwrite the gyro matrix and orientation with the fused orientation
    // to compensate for gyro drift.
    gyroMatrix = getRotationMatrixFromOrientation(fusedOrientation);
    System.arraycopy(fusedOrientation, 0, gyroOrientation, 0, 3);
}
```

## The Result:

The result is a very stable estimation of the rotation of the device. No more drift. Keep in mind that this algorithm is still subject to failing under extended periods of linear acceleration or large distortions in the magnetic field, but it is certainly better than the gyroscope alone.