How mobile artificial intelligence is making smartphones smarter by using less data

    Date: 6 March 2018 | Author: Lindsey Schutters

    The machines aren’t ready to replace us, yet. They can beat us at Dota 2 and Go, but they can’t make up new games for us to play. Machines, algorithms and code are ultimately created by humans to execute predefined tasks. But we are making them smarter. And this definition of smart doesn’t mean “connected to the internet.” No, this is on-device artificial intelligence, and it comes at a considerable cost: these new processors and functions are the reason smartphones have breached the R20 000 mark.
    Forget Siri and Google Assistant: artificial intelligence at the hardware level is incredible, but not quite useful enough to eliminate a human. You still need to press the shutter key and actually point the camera at something for the Huawei Mate 10 Pro’s headline AI feature to work, for instance. Similarly, the iPhone X (ten) is spotty with Face ID for the first day or so of use, and then becomes nearly infallible.

    Both those features work on the concept of machine learning via the neural processors built into the respective processing units. Just as you build neural pathways when you learn a new fact or skill, and then strengthen those pathways through repetition or revision, so too do these new-age chips. The result? Sometimes astonishing photographs and super-secure device unlock methods.
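    To make that analogy concrete, here’s a minimal sketch in Python of how repetition “strengthens pathways” in even the simplest learning system: each pass over the training examples nudges the connection weights until the right association sticks. This is generic perceptron learning, not anything Huawei or Apple has published.

    ```python
    import random

    def train_neuron(examples, epochs=50, lr=0.1):
        """examples: list of (inputs, target) pairs, targets 0 or 1."""
        n = len(examples[0][0])
        weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
        bias = 0.0
        for _ in range(epochs):                 # repetition/revision
            for inputs, target in examples:
                activation = sum(w * x for w, x in zip(weights, inputs)) + bias
                output = 1.0 if activation > 0 else 0.0
                error = target - output
                # Strengthen or weaken each "pathway" in proportion to the error.
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # Teach the neuron an AND-like rule through repeated exposure.
    print(train_neuron([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]))
    ```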

    Huawei were first to the party with the Kirin 970 processor and its neural processing unit (NPU) built on top. This processor builds on a concept Huawei used on Kirin 960 devices like the Mate 9 and the P10, where the device analyses your usage to predict behaviour and allocate resources.

    That rudimentary AI now has dedicated hardware to recognise your usage patterns even better. Engineers also fed the NPU millions of carefully labelled images, and the system established 13 scenarios that it can now automatically recognise and adjust the camera for. It doesn’t work flawlessly: it can’t identify my dogs as dogs (maybe they don’t have Australian Cattle Dogs or Alsatians in China), and it mistook a bunch of flowers for a plate of food. But when it works, it’s ridiculously good, even better than I could manage in manual mode. Crucially, though, I still had to choose to take the picture.

    The AI didn’t recognise this dog as a dog and the flowers registered as a plate of food.
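    Conceptually, that scene-recognition feature boils down to a classifier picking a label for the frame, and the label picking a tuning preset. The sketch below is my own illustration of that flow; the labels, presets and confidence threshold are guesses, not Huawei’s actual list of 13 scenarios.

    ```python
    # Hypothetical scene labels and camera presets, for illustration only.
    SCENE_PRESETS = {
        "food":     {"saturation": +15, "sharpness": +5},
        "dog":      {"shutter_bias": "fast", "continuous_af": True},
        "flowers":  {"saturation": +10, "macro_af": True},
        "portrait": {"skin_smoothing": True, "background_blur": True},
    }

    def apply_scene_tuning(frame, classifier):
        """Classify the frame and return the matching camera adjustments."""
        label, confidence = classifier(frame)
        # Misfires like "flowers -> food" happen when confidence is low or
        # the training data under-represents the subject (sorry, dogs).
        if confidence < 0.5 or label not in SCENE_PRESETS:
            return {}                           # fall back to default tuning
        return SCENE_PRESETS[label]

    def dummy_classifier(frame):
        # Stand-in for the NPU-accelerated model; always "sees" food.
        return ("food", 0.62)

    print(apply_scene_tuning(frame=None, classifier=dummy_classifier))
    ```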

    The simulated aperture subject-isolation stuff also gets an AI shot in the arm, for more precise edge detection and more even background blur. I can confirm that these improvements are significant over the likes of the P10 Plus, which would fail to separate the area between your legs and lump it in with the focal plane.
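    Once a segmentation model has produced a subject mask, the rest of that simulated-aperture trick is straightforward compositing. Here’s a rough sketch using Pillow, with a precomputed mask standing in for the hard part (the NPU-driven segmentation itself):

    ```python
    from PIL import Image, ImageFilter

    def simulated_aperture(photo_path, mask_path, blur_radius=12):
        photo = Image.open(photo_path).convert("RGB")
        # White = subject (kept sharp), black = background (blurred).
        mask = Image.open(mask_path).convert("L").resize(photo.size)
        background = photo.filter(ImageFilter.GaussianBlur(blur_radius))
        # Imprecise masks (like the P10 Plus missing the gap between your
        # legs) show up here as haloes along the subject's edges.
        return Image.composite(photo, background, mask)

    # simulated_aperture("portrait.jpg", "subject_mask.png").save("bokeh.jpg")
    ```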

    All this machine learning is powered by the NPU, which Huawei claims can deliver 1,92 teraFLOPS of 16-bit floating-point performance. Remember, that’s built on top of an octa-core (4×2,4 GHz Cortex-A73 + 4×1,8 GHz Cortex-A53) CPU and a dodeca-core Mali-G72 MP12 graphics processor. That makes for a lot of real-time processing power.
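    To put that claimed figure in perspective, here’s some back-of-the-envelope arithmetic (peak throughput only; real-world utilisation is far lower):

    ```python
    NPU_FLOPS = 1.92e12   # Huawei's claimed peak FP16 throughput
    FRAME_RATE = 30       # a typical camera preview, frames per second

    ops_per_frame = NPU_FLOPS / FRAME_RATE
    print(f"{ops_per_frame:.2e} operations available per frame")   # 6.40e+10

    # A MobileNet-class image classifier needs on the order of 1e9
    # operations per inference, comfortably within that budget.
    print(f"~{ops_per_frame / 1e9:.0f}x a ~1-GFLOP mobile model, every frame")
    ```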

    There are a few intelligent enhancements to the Leica-engineered Huawei Mate 10 Pro camera. In this image it automatically recognised a person and adjusted for the scene.

    Huawei put the AI powers of the Mate 10 to work in other areas too, teaming up with Microsoft for real-time translation through the Translate app. Yes, that’s on-device translation in text and speech. Then there’s the really exciting AI noise cancelling: the device learns your voice over time, so you can speak in a normal tone even in noisy environments.
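    Huawei hasn’t published how that voice learning works, but one classical approach is to build a frequency profile of your voice from enrolment audio, then attenuate whatever falls outside it. A heavily simplified sketch, assuming 1-D NumPy arrays of audio samples:

    ```python
    import numpy as np

    def learn_voice_profile(clips, frame=1024):
        """Average magnitude spectrum over enrolment clips."""
        spectra = [np.abs(np.fft.rfft(c[:frame])) for c in clips if len(c) >= frame]
        return np.mean(spectra, axis=0)

    def suppress_noise(audio, profile, frame=1024, floor=0.1):
        out = audio.astype(float).copy()
        # Per-bin gain: bins where your voice is normally strong pass
        # through; everything else gets ducked towards the floor.
        gain = np.clip(profile / (profile.max() + 1e-9), floor, 1.0)
        for start in range(0, len(out) - frame + 1, frame):
            spectrum = np.fft.rfft(out[start:start + frame])
            out[start:start + frame] = np.fft.irfft(spectrum * gain, n=frame)
        return out
    ```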

    While some of these applications seem like overkill and a little gimmicky, it’s an interesting glimpse into the future of mobile computing: secure, hardware-level ambient computing with a dollop of ’90s nostalgia on top for good measure.

    Speaking of secure computing, Apple also equipped the iPhone X with new Face ID tech in the shape of a new camera array. The infrared dot projector flashes a matrix of contour-defining markers onto your face, and the infrared camera decodes the pattern and checks it against the data it has in memory. If there’s a slight alteration to your face, the stored file gets amended. This process is handled entirely by the neural engine, and the data is stored in the Secure Enclave.
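    The match-then-adapt loop Apple describes can be sketched as a similarity check between face encodings, with a gentle template update on borderline matches. Everything below (the thresholds, the blend factor, cosine similarity itself) is my assumption, not Apple’s published method:

    ```python
    import numpy as np

    MATCH_THRESHOLD = 0.90   # below this: reject the unlock attempt
    ADAPT_THRESHOLD = 0.95   # between the two: accept, then nudge the template

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def try_unlock(template, capture, blend=0.05):
        """template and capture are embedding vectors from a face model."""
        score = cosine(template, capture)
        if score < MATCH_THRESHOLD:
            return False, template              # not you; template untouched
        if score < ADAPT_THRESHOLD:
            # Slight change (stubble, glasses): accept and amend the stored file.
            template = (1 - blend) * template + blend * capture
        return True, template
    ```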

    The A11 Bionic’s neural processing extends to the cameras’ photography functions as well. On the main camera it does some level of scene recognition and plays a big role in the image signal processing; iOS 11 has dramatically improved the iPhone’s high dynamic range functions, for instance. While the edge detection in Portrait Mode isn’t quite up to the standard that Google and Huawei have reached with their latest devices, the Portrait Lighting effects can deliver very pleasing results.
    Apple’s biggest problem seems to be an over-reliance on hardware depth mapping. Portrait Mode on the selfie camera is only enabled on the iPhone X because of the depth data it gains from the Face ID sensors. Google, on the other hand, entrusts all of that processing to artificial intelligence.
    It’s been fascinating to watch the AI arms race graduate from voice assistants to tangible improvements in the actual tasks we buy these devices for. Camera advancement from generation to generation seems incremental at first glance, but we are now equipping phones with teachable hardware that can finally deliver on the promise of meaningful improvement through software updates. “Smart” now actually means smart, not just connected.
