Thursday, February 9, 2017

Deep Learning Meetup 2017-1 in Munich

I got the chance to check out another deep learning meetup. This time it was hosted at Google's Isar Valley here in Munich. The three interesting talks were about the following topics:

  • Visual Sentiment Analysis with Deep Convolutional Neural Networks
    (by Dr. Damien Borth, DFKI)
  • Strategies for AI Deployment
    (by Henrik Klagges, TNG Consulting GmbH)
  • DGX-1 and SATURNV: The World’s Most Efficient Supercomputer for AI and Deep Learning
    (by Ralph Hinsche, Nvidia)
During Nvidia's talk, we were also able to hold a test sample of the latest Tesla P100 in our hands, which is one of the building blocks of Nvidia's deep learning supercomputer, the DGX-1. This is a nice super toy that every AI researcher would like to find under the Christmas tree. Unfortunately, a single device costs more than 100,000 US dollars.


Thursday, February 2, 2017

Intel AI Days 2017


I had the pleasure of checking out Intel's first AI Days in Europe. At the ICM in Munich, Intel presented their latest advancements in Artificial Intelligence and Deep Learning, in both hardware and software. As one of the biggest players in the hardware industry, they talked a lot about their upcoming Lake Crest chip, which is optimized for Deep Learning. Furthermore, a representative of Nervana Systems, which was acquired by Intel for more than 400 million US dollars in October 2016, introduced their deep learning platform.


Additionally, they talked a lot about low-level optimizations that they have done in order to accelerate many deep learning workloads on Intel hardware, such as the Intel Math Kernel Library (MKL). In some examples, they showed amazing improvements by a factor of up to 400. This sounds too good to be true in my ears, but even half of that is more than welcome!
They also presented their Neon framework, which feels like it sits somewhere between TensorFlow and Keras, as well as the high-level, easy-to-use Intel Deep Learning SDK for non-programmers. The latter is currently only suitable for image data. During these presentations, some others in the audience and I were a little confused as to why Intel presents several deep learning frameworks instead of focusing on a single one.


One last thing that stuck in my mind after both days: bar charts, bar charts, and even more bar charts. Intel really loves bar charts. After a while, I couldn't stand them any longer, especially because of their redundancy from one presentation to the next. Nevertheless, I'm excited to see whether consumer-market CPUs will also benefit from these advancements in the near future.

All presentation slides can be found HERE.

Monday, January 30, 2017

Universal App: PriceChecker

Not sure whether the product you would like to purchase is an awesome deal or just another rip-off? Well, my new PriceChecker app for Windows 10 might be the perfect match for you! Simply scan the barcode on the price tag and compare the price and consumer reviews on Amazon.


The app is free of charge and contains no adverts. So, what are you waiting for? Check it out and download it from the Windows Store...

 Download PriceChecker

Tuesday, January 24, 2017

UWPCore: A development acceleration framework for the Universal Windows Platform

Since it has proven its stability and reliability in two successful Windows 10 projects for more than a year, we decided to open source our service-driven framework. Even though it has not reached version 1.0 yet, you can nevertheless use it for your next project right now. Check out the UWPCore Framework on GitHub. I developed this framework over the course of the last year together with my friend Patrick Mutter.


More information about the framework can be found on the landing page of the repository, including a short description of how to get started. If you use our framework and encounter any kind of problem or bug, feel free to either open an issue on GitHub or submit a pull request.

Thursday, January 19, 2017

Update: Action Note 2.2

Action Note just received another update. Version 2.2 brings a couple of improvements:
  • Added support to set Action Note as default app for Notes using the "onenote-cmd" protocol (PC/Tablet only)
  • Updated UI of the sidebar-menu
  • Fixed minor UI issues
  • Added new languages: Dutch, Hungarian

Due to the included "onenote-cmd" protocol binding, Action Note can now finally be set as the default app for the "Note" button within the Action Center. Unfortunately, the app has to be set as the default manually. Furthermore, the default app settings are not available on Windows 10 Mobile yet.
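For those curious how such a protocol binding looks: a UWP app declares the URI scheme it can handle in its Package.appxmanifest. The following is only an illustrative sketch, not Action Note's actual manifest, which isn't shown here.

```xml
<!-- Hypothetical excerpt of a Package.appxmanifest (inside the <Application> element).
     Declaring the "onenote-cmd" scheme lets Windows offer this app as a handler
     whenever that protocol is launched, e.g. by the Action Center's "Note" button. -->
<Extensions>
  <uap:Extension Category="windows.protocol">
    <uap:Protocol Name="onenote-cmd" />
  </uap:Extension>
</Extensions>
```

The app is then activated via `OnActivated` with an `ActivationKind.Protocol` argument, but the user still has to pick it as the default handler manually, as described above.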


By the way, this version of Action Note is powered by the (finally!) first public release candidate of our framework for UWP-based projects. As soon as the repository of the framework is public, I will post the link on this blog.

Friday, December 30, 2016

Update: Action Note 2.1

After receiving some user reports regarding a synchronization issue with the Action Center since the latest Windows 10 update, I had to publish yet another update for Action Note.


After having a closer look, the reason for this problem was obvious, but it was not caused by the app itself. In one of its last updates, Microsoft enabled a new feature called Notification Mirroring, which synchronizes notifications across all devices using Cortana. Unfortunately, this conflicted with Action Note's own cross-device online-sync feature.

The fix for this was actually easy: I was simply able to disable the mirroring feature for all Action Note notifications. Personally, I would suggest that Microsoft should not auto-enable this feature by default.

Besides that major fix, version 2.1 comes with a new alphabetical ordering option, which was requested by several users via email. Additionally, I updated the Polish and Swedish translations.

Enjoy!

Tuesday, November 8, 2016

TensorLight: A high-level framework for TensorFlow projects

In the course of developing my Master's Thesis "Deep Learning Approaches to Predict Future Frames in Videos" at TUM, I realized that the high flexibility of TensorFlow has its price: boilerplate code. Many things that are needed in almost every neural network training or evaluation script have to be implemented over and over again. To that end, I started to implement a high-level API for Google's machine intelligence library, called TensorLight.

TensorLight

TensorLight comes with four guiding principles:

  • Simplicity: Straight-forward to use for anybody who has already worked with TensorFlow. In particular, no further learning is required for defining a model's graph.
  • Compactness: Reduce boilerplate code, while keeping the transparency and flexibility of TensorFlow.
  • Standardization: Provide a standard way with respect to the implementation of models and datasets in order to save time. Further, it automates the whole training and validation process, but also provides hooks to maintain customizability.
  • Superiority: Enable advanced features that are not included in the TensorFlow API, as well as retain its full functionality.
The project code of my thesis is almost entirely based on this framework. I was able to refactor and move about 99% of my training and evaluation code into it, along with all the best practices I gained throughout this phase.
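The standardization-with-hooks principle can be sketched in plain Python. Note that this is a hypothetical analogy and not the actual TensorLight API: models implement only their own step logic, while a shared runtime automates the surrounding loop and exposes callback hooks for customization.

```python
# Illustrative sketch only: NOT the real TensorLight API, just the pattern it follows.
from abc import ABC, abstractmethod

class AbstractModel(ABC):
    """Models only define their per-batch step; the runtime drives everything else."""
    @abstractmethod
    def train_step(self, batch):
        """Process one batch and return its loss."""

class Runtime:
    """Automates the training loop, but exposes a hook to keep it customizable."""
    def __init__(self, model, on_epoch_end=None):
        self.model = model
        self.on_epoch_end = on_epoch_end  # hook: called as on_epoch_end(epoch, loss)
        self.history = []

    def train(self, batches, epochs):
        for epoch in range(epochs):
            # Boilerplate (looping, averaging, bookkeeping) lives here, once.
            epoch_loss = sum(self.model.train_step(b) for b in batches) / len(batches)
            self.history.append(epoch_loss)
            if self.on_epoch_end:
                self.on_epoch_end(epoch, epoch_loss)
        return self.history

class ToyModel(AbstractModel):
    def train_step(self, batch):
        return sum(batch) / len(batch)  # stand-in "loss" for demonstration

rt = Runtime(ToyModel(), on_epoch_end=lambda e, l: print(f"epoch {e}: loss {l:.2f}"))
history = rt.train(batches=[[1.0, 2.0], [3.0, 4.0]], epochs=2)
```

In a real TensorFlow setting, the runtime would additionally own the session handling, checkpointing, and summary writing, which is exactly the boilerplate mentioned above.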