Identity documents are traditionally designed to be verified in person, in three dimensions: you can shine a UV light to reveal security features, feel the document's texture, or check whether a photo has been stuck on. But with 6.5 billion online transactions needing verification each year, we need to adapt to verifying documents remotely, in 2D.
This means adapting not just to verifying a 2D document, but to verifying a capture of one: a capture that may have been taken in poor light, with a low-quality camera. With so many variables in play, how can you perform the best analysis?