Apple has published its very first artificial intelligence research paper online, following the announcement it made at NIPS 2016 earlier this month.
The (usually) secretive company published the paper, entitled “Learning from Simulated and Unsupervised Images through Adversarial Training,” on December 22. The paper’s lead author is Apple researcher Ashish Shrivastava, and its co-authors are Tomas Pfister, Oncel Tuzel, Wenda Wang, Russ Webb, and Josh Susskind.
The paper details how to improve an algorithm’s ability to recognize images by training it on computer-generated images rather than real-world photographs, since synthetic images come already labeled and annotated and require no manual work. However, computer-generated image data “is often not realistic enough, leading the network to learn details only present in synthetic images and fail to generalize well on real images,” according to the research.
Therefore, the gist is to boost the realism of synthetic images through what the Apple researchers call Simulated + Unsupervised (S+U) learning. To do so, they use a modified version of Generative Adversarial Networks, a recent machine learning technique that pits two neural networks against each other: one network generates (here, refines) images, while the other tries to distinguish them from real ones.
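The adversarial idea can be illustrated with the two opposing loss terms: a discriminator is trained to output high probabilities on real images and low probabilities on refined synthetic ones, while the refiner is rewarded whenever the discriminator is fooled. The sketch below is a simplified illustration of that standard GAN objective, not Apple’s actual implementation; the example probability values are made up.

```python
import math

def discriminator_loss(d_real, d_refined):
    """Binary cross-entropy: the discriminator should output ~1
    on real images and ~0 on refined (synthetic) images."""
    terms = [math.log(r) + math.log(1.0 - f) for r, f in zip(d_real, d_refined)]
    return -sum(terms) / len(terms)

def refiner_adversarial_loss(d_refined):
    """The refiner's adversarial term: it is rewarded when the
    discriminator mistakes its refined images for real ones."""
    return -sum(math.log(f) for f in d_refined) / len(d_refined)

# Hypothetical discriminator outputs (probability of "real"):
d_real = [0.9, 0.8]     # discriminator is fairly confident on real images
d_refined = [0.2, 0.3]  # discriminator mostly rejects refined images

# With a strong discriminator, its own loss is low while the
# refiner's loss is high -- pushing the refiner toward more
# realistic output as the two networks are trained in turn.
print(discriminator_loss(d_real, d_refined))
print(refiner_adversarial_loss(d_refined))
```

Training alternates between minimizing these two losses, so each network’s improvement forces the other to improve in response.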
You can view the research paper here.