Thursday, January 19, 2017

Update: Action Note 2.2

Action Note just received another update. Version 2.2 brings a couple of improvements:
  • Added support to set Action Note as default app for Notes using the "onenote-cmd" protocol (PC/Tablet only)
  • Updated the UI of the sidebar menu
  • Fixed minor UI issues
  • Added new languages: Dutch, Hungarian

Thanks to the included "onenote-cmd" protocol binding, Action Note is now finally able to be set as the default app for the "Note" button within the Action Center. Unfortunately, the app has to be set as the default manually. Furthermore, the default app settings are not available on Windows 10 Mobile yet.
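For those curious how such a protocol association works under the hood: in a UWP app, it is declared in the Package.appxmanifest. The following is a minimal sketch of such a declaration (the surrounding Application element is omitted); Action Note's actual manifest may of course look different:

```xml
<Extensions>
  <!-- Registers the app as a handler for the "onenote-cmd" URI scheme,
       which the Action Center's "Note" button invokes. -->
  <uap:Extension Category="windows.protocol">
    <uap:Protocol Name="onenote-cmd" />
  </uap:Extension>
</Extensions>
```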

By the way, this version of Action Note is powered by the (finally) first public release candidate of our framework for UWP-based projects. As soon as the framework's repository is public, I will post the link on this blog.

Friday, December 30, 2016

Update: Action Note 2.1

After receiving some user reports regarding a synchronization issue with the Action Center since the latest Windows 10 update, I had to publish yet another update for Action Note.

After having a closer look, the reason for this problem was obvious, but it was not caused by the app itself. In one of its last updates, Microsoft enabled a new feature called Notification Mirroring, which synchronizes notifications across all devices using Cortana. Unfortunately, this conflicted with Action Note's own cross-device online-sync feature.

The fix for this was actually easy: I was simply able to disable the mirroring feature for all Action Note notifications. Personally, I would suggest that Microsoft should not auto-enable this feature by default.

Besides that major fix, version 2.1 comes with a new alphabetical ordering option, which was requested by several users via email. Additionally, I updated the Polish and Swedish translations.


Tuesday, November 8, 2016

TensorLight: A high-level framework for TensorFlow projects

In the course of writing my Master's Thesis "Deep Learning Approaches to Predict Future Frames in Videos" at TUM, I realized that the high flexibility of TensorFlow has its price: boilerplate code. Many things that are needed in almost every neural network training or evaluation script have to be implemented over and over again. To that end, I started to implement a high-level API for Google's machine intelligence library, called TensorLight.


TensorLight comes with four guiding principles:

  • Simplicity: Straightforward to use for anybody who has already worked with TensorFlow. In particular, nothing new has to be learned about defining a model's graph.
  • Compactness: Reduce boilerplate code while keeping the transparency and flexibility of TensorFlow.
  • Standardization: Provide a standard way to implement models and datasets in order to save time. Furthermore, the framework automates the whole training and validation process, but also provides hooks to maintain customizability.
  • Superiority: Enable advanced features that are not included in the TensorFlow API, while retaining its full functionality.
The project solution of my thesis is almost entirely based on this framework. I was able to refactor and move about 99% of my training and evaluation code, as well as all the best practices I gained throughout this phase, into it.
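To illustrate the "Standardization" and hook ideas above, here is a tiny, purely illustrative sketch in plain Python. The class and method names are my own assumptions for the sake of the example; they are not taken from the actual TensorLight API:

```python
# Illustrative sketch: a model only defines its graph and loss, while a
# standardized runtime drives the training loop and exposes hooks.
# All names below are hypothetical, not the real TensorLight API.

class AbstractModel:
    """Subclasses define the model-specific parts; the runtime does the rest."""
    def predict(self, x):
        raise NotImplementedError

    def loss(self, prediction, target):
        raise NotImplementedError


class SquaredErrorModel(AbstractModel):
    def predict(self, x):
        return 2 * x  # toy "graph": simply doubles the input

    def loss(self, prediction, target):
        return (prediction - target) ** 2


class Runtime:
    """Standardized train loop with a per-step hook for customizability."""
    def __init__(self, model, on_step=None):
        self.model = model
        self.on_step = on_step or (lambda step, loss: None)

    def train(self, data):
        history = []
        for step, (x, y) in enumerate(data):
            pred = self.model.predict(x)
            loss = self.model.loss(pred, y)
            self.on_step(step, loss)  # hook point for custom behavior
            history.append(loss)
        return history


rt = Runtime(SquaredErrorModel())
losses = rt.train([(1, 2), (2, 4), (3, 7)])
print(losses)  # [0, 0, 1]
```

The point of such a design is that every project shares one battle-tested loop, while the hook keeps project-specific logging or checkpointing possible.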

Monday, October 17, 2016

Deep Learning Approaches to Predict Future Frames in Videos

I finally finished my Master's Thesis at the Computer Vision chair at TUM. In the course of this thesis, I analyzed existing deep learning approaches to predict future frames in videos. Based on these findings and other modern deep learning practices, such as batch normalization, scheduled sampling to improve recurrent network training, or ConvLSTMs, we were able to reach or even outperform state-of-the-art performance in future frame generation.
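For readers unfamiliar with scheduled sampling (Bengio et al., 2015), here is a minimal sketch of the idea: during training, the probability of feeding the recurrent network the ground-truth frame (teacher forcing) decays over time, so the model increasingly learns from its own predictions. The inverse-sigmoid schedule and the decay constant k below are illustrative choices, not values from the thesis:

```python
# Minimal sketch of scheduled sampling with inverse-sigmoid decay.
# k is a free hyperparameter controlling how fast teacher forcing fades out.
import math
import random

def ground_truth_prob(step, k=1000.0):
    """Inverse sigmoid decay: starts near 1, approaches 0 as step grows."""
    return k / (k + math.exp(step / k))

def pick_input(step, ground_truth, prev_prediction, rng=random):
    """Sample the next decoder input for this training step."""
    if rng.random() < ground_truth_prob(step):
        return ground_truth    # teacher forcing: feed the real frame
    return prev_prediction     # feed the model's own last prediction

print(round(ground_truth_prob(0), 3))      # 0.999
print(round(ground_truth_prob(20000), 3))  # 0.0
```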

So far, many people have asked me about the practical application of frame prediction. Unfortunately, it won't tell us the end of a cliffhanger movie such as Inception, but the main purpose of such a system is not to generate a perfect forecast of the long-term continuation of any movie clip. In my opinion, this is completely impossible, since in many situations there is no single right or wrong: a neural network cannot predict every decision made by all the objects inside a scene, and the pose of the camera or the environment could change unexpectedly. More interestingly, the trained model has to be able to distinguish foreground from background, as well as encode the content and dynamics of the frame sequence. In this thesis, we use these learned representations to predict a possible future frame sequence, using a completely unsupervised learning process. Parts of this trained neural network could be reused in a supervised learning task, such as action recognition in videos. Also, a similar model architecture could be used to train a neural network that generates high-speed videos using frame interpolation, or be applied in the context of video compression.

But let's get back to the application example that was used within the thesis: future frame prediction in videos. To assess the model, we used three different datasets of increasing complexity. First, we used MovingMNIST with two digits (left). Surprisingly, the model also delivered good results when we performed an out-of-domain test using one or three digits (right).


In a second experiment, we used video game recordings of MsPacman. As can be seen in the prediction example below, our trained 2-layer ConvLSTM Encoder-Predictor model is able to capture several dynamics of the game, such as the movement of Pacman and the ghosts, the blinking of the big dot in the top-right corner, as well as the fact that Pacman is eating the dots within the maze.

In our last experiment, we trained our model on the UCF-101 training set. This is a much harder problem, since the environment comes with unlimited possibilities, the camera can exhibit movement and/or rotation, and so on. Like many other solutions, we notice a blur effect in the generated future frames, even though we take advantage of perceptually motivated loss terms, such as SSIM or GDL. However, some results look satisfactory nevertheless. As an example, the zooming of the camera is captured and correctly continued in the soccer example below.
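The gradient difference loss (GDL) mentioned above penalizes differences between the spatial gradients of the predicted and ground-truth frames, which sharpens edges compared to a plain L2 loss. The following is a simplified plain-Python sketch of the idea for 2-D frames (with the usual alpha exponent fixed to 1), not the exact formulation used in the thesis:

```python
# Simplified gradient difference loss (GDL) for 2-D frames given as
# lists of rows. Compares absolute finite differences (image gradients)
# of prediction and target in both spatial directions.

def gdl(pred, target):
    def grads(img):
        # Horizontal gradients within each row.
        gx = [abs(row[j + 1] - row[j]) for row in img for j in range(len(row) - 1)]
        # Vertical gradients between adjacent rows.
        gy = [abs(img[i + 1][j] - img[i][j])
              for i in range(len(img) - 1) for j in range(len(img[0]))]
        return gx, gy

    px, py = grads(pred)
    tx, ty = grads(target)
    return (sum(abs(a - b) for a, b in zip(px, tx)) +
            sum(abs(a - b) for a, b in zip(py, ty)))

frame = [[0, 0], [0, 1]]        # toy ground-truth frame with one bright pixel
print(gdl(frame, frame))        # 0: identical frames have identical gradients
dimmed = [[0, 0], [0, 0.5]]
print(gdl(frame, dimmed))       # 1.0: dimming halves every edge gradient
```

An L2 loss would also notice the dimming, but GDL specifically punishes the weakened edges, which is why it counteracts blur.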

Of course, there is much more to tell. But the main intention of this post is to provide a rough idea of what has been done, as well as to show some prediction examples of my trained recurrent encoder-decoder network. In case you would like to know more about it, just have a look at my written Master's Thesis or write a comment below this post.

Saturday, October 15, 2016

Language change...

I personally think it is time to switch to English. I'm not sure why I waited so long to do this, but the advantages of writing my posts in English clearly dominate. To name just a few, it obviously reaches more people and helps me improve my own English writing skills. You never stop learning! ;-)

Saturday, June 18, 2016

Update: Action Note 1.15

A new update for Action Note has just been published. Special attention was paid to the extensive user feedback of the last months.

This time the update is quite comprehensive and includes the following improvements:
  • Fixes a bug where a note could not be saved since the last update when "auto save" was disabled
  • Ability to hide individual notes in the Action Center
  • New setting to define the default category (color) of a note
  • New settings for personalizing the main tile
  • The live tiles now also highlight important notes with the flag symbol
  • Fixes an issue where a note occasionally had to be selected multiple times to open the edit page
  • Improved context menu, which no longer waits until the finger is released to open
  • Adds tooltips for options that have no description text
  • Attachments that were created in the free version or while online sync was disabled are now uploaded correctly after synchronization is re-enabled
  • Fixes a minor bug where the note list was refreshed twice on startup, which caused a screen flicker
  • Fixes a minor bug where the state of the hamburger menu was not set correctly when the app window was resized

Saturday, May 21, 2016

Update: Action Note 1.14

In the last few weeks, hundreds of emails regarding Action Note have found their way to me. They contained some interesting ideas and suggestions, but also reports of bugs in Action Note. Some of them make their way into the app with today's update. In detail, the update contains the following changes:
  • New option to minimize or maximize the notes on the main page / archive
  • New sync settings: 4-hour interval and manual
  • Fixes a bug where notes were unnecessarily saved twice
  • Fixes a bug where a note was saved under certain circumstances even though "Discard" was selected
  • Fixes a bug where the main tile was not updated when a note was deleted in the Action Center
  • Further minor bug fixes
  • New language: Hebrew
Finally, I would like to take this opportunity to sincerely thank Roie Karpo from Israel for providing the translation!