Here I list some of my more recent projects.
They range from a wearable device that teaches users to play piano,
to a user interface for generating gesture recognizers with
a low false positive rate, to a project on dolphin
communication. If you want to know more than what is provided on this
page, please check my CV or email me directly,
especially if you are interested in code or data
from the experiments.
Some other stuff I did:
I am a software engineer in machine learning, working on an advertisement / audience targeting team.
The technology I use includes SQL, Python, Hack, and C++.
I was a senior data scientist at the e-commerce company Shopify. I worked with the
finance and growth organisation to build machine learning models and data models for online marketing
and other business cases. Through my work there I got into dimensional data modeling and mostly used PySpark.
And yes, I am still using XGBoost :D
I was a senior data scientist at Xing, a mostly German professional social network.
Most of my projects there focused on machine learning and recommender systems.
Some projects reranked our job and member recommendations using machine learning
models such as XGBoost. Other projects involved modeling job postings with word embeddings
to support semantic "more like this" queries,
as well as classifying job postings into categories such as
industry, career level, and discipline.
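The "more like this" idea can be illustrated with a minimal sketch: represent each posting as the mean of its word embeddings and rank candidates by cosine similarity. The embedding table below is a toy stand-in (hypothetical values), not the actual model used at Xing.

```python
import numpy as np

# Toy embedding table; in practice these vectors would come from a model
# such as word2vec trained on job posting text (values here are made up).
EMB = {
    "python":   np.array([0.9, 0.1, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.1]),
    "nurse":    np.array([0.0, 0.9, 0.3]),
    "clinic":   np.array([0.1, 0.8, 0.4]),
}

def posting_vector(tokens):
    """Represent a job posting as the mean of its word embeddings."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def more_like_this(query_tokens, postings):
    """Rank candidate postings by embedding similarity to the query posting."""
    q = posting_vector(query_tokens)
    scored = [(cosine(q, posting_vector(p)), p) for p in postings]
    return sorted(scored, reverse=True)
```

Averaging embeddings throws away word order, but it is a common, cheap baseline for semantic similarity between short texts.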
[RecSysNL] Talk
We also hosted the ACM recommender systems challenge 2016 and 2017.
The project's aim is to support behavior researchers
to understand and get insight into dolphin behavior and
their cognitive abilities. We do so by analyzing dolphins'
audible communication. We have a fruitful collaboration with
Denise Herzing, a behavior researcher who has studied wild
dolphins for several years. The two main challenges we address
in the project are establishing two-way communication
between humans and dolphins using an underwater wearable
computer, and finding patterns in underwater recordings of
audible dolphin communication.
In the top right of the image, one can
see a dolphin whistle. Whistles are one form of dolphin
communication. We built an algorithm to find the basic
composition of these whistles: we detect atomic
events in dolphin communication and model how they occur in
context. We do so by learning a mixture of hidden Markov
models. Some of the results are shown below the whistle
image (bottom right). For more information see our ICASSP 2014
paper or our Interspeech 2016 paper. In our recent work we
use unsupervised deep learning for audio modeling; if you are interested
in that, check our IJCNN 2020 paper.
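To give a feel for the mixture-of-HMMs idea, here is a minimal sketch of its assignment step: score each symbol sequence under every component HMM with the standard forward algorithm and assign it to the most likely component. The two toy HMMs below are made up for illustration and are not the models from the papers.

```python
import numpy as np

def log_forward(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm in log space.
    pi: initial state probs, A: transition matrix, B: emission matrix."""
    alpha = np.log(pi) + np.log(B[:, obs[0]])
    for o in obs[1:]:
        # alpha_j = sum_i alpha_i * A[i, j], then multiply by B_j(o)
        alpha = np.logaddexp.reduce(alpha[:, None] + np.log(A), axis=0) + np.log(B[:, o])
    return float(np.logaddexp.reduce(alpha))

def assign_to_components(sequences, hmms):
    """Assignment step of a mixture of HMMs: each sequence goes to the
    component HMM under which it is most likely."""
    return [int(np.argmax([log_forward(s, *h) for h in hmms])) for s in sequences]

# Two toy components over a binary symbol alphabet (illustrative only):
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
hmm_low  = (pi, A, np.array([[0.9, 0.1], [0.8, 0.2]]))  # emits mostly symbol 0
hmm_high = (pi, A, np.array([[0.1, 0.9], [0.2, 0.8]]))  # emits mostly symbol 1
```

A full mixture would alternate this assignment step with re-estimating each component's parameters from the sequences assigned to it.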
You can also see the program in action, presented by Denise and Thad, on YouTube. I also gave a talk about our deep learning work at the International Joint Conference on Neural Networks in 2020:
Gestures for interfaces should be short, pleasing, intuitive,
and easily recognized by a computer. However, it is a
challenge for interface designers to create gestures easily
distinguishable from users' normal movements. Our tool MAGIC
Summoning addresses this problem. Given a specific platform
and task, we gather a large database of unlabeled sensor data
captured in the environments in which the system will be used
(an "Everyday Gesture Library" or EGL). The EGL is quantized
and indexed via multi-dimensional Symbolic Aggregate
approXimation (SAX) to enable quick searching. MAGIC exploits
the SAX representation of the EGL to suggest gestures with a
low likelihood of false triggering. Suggested gestures are
ordered according to brevity and simplicity, freeing the
interface designer to focus on the user experience. Once a
gesture is selected, MAGIC can output synthetic examples of
the gesture to train a chosen classifier (for example, with a
hidden Markov model). If the interface designer suggests his
own gesture and provides several examples, MAGIC estimates how
accurately that gesture can be recognized and estimates its
false positive rate by comparing it against the natural
movements in the EGL. For more information see the JMLR or
Face and Gesture paper.
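The SAX quantization step MAGIC relies on can be sketched in a few lines: z-normalize the signal, reduce it with Piecewise Aggregate Approximation (PAA), and map each segment mean to a letter via breakpoints chosen for a standard normal distribution. This is a one-dimensional sketch of standard SAX, not MAGIC's multi-dimensional indexing code.

```python
import numpy as np

# Standard SAX breakpoints for an alphabet of size 4: they split the
# standard normal distribution into four equiprobable regions.
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])

def sax(series, n_segments, alphabet="abcd"):
    """Convert a 1-D time series to a SAX word."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                   # z-normalize
    # PAA: mean of each segment (assumes length divisible by n_segments)
    paa = x.reshape(n_segments, -1).mean(axis=1)
    symbols = np.searchsorted(BREAKPOINTS, paa)    # breakpoint lookup
    return "".join(alphabet[i] for i in symbols)
```

Because SAX words are short discrete strings, an EGL quantized this way can be indexed and searched quickly, which is what lets MAGIC estimate false-trigger likelihood against hours of everyday movement.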
Mobile Music Touch (MMT) helps teach users to play piano melodies
while they perform other tasks. MMT is a lightweight, wireless
haptic music instruction system consisting of fingerless
gloves and a Bluetooth-enabled mobile computing device, such
as a mobile phone. Passages to be learned are loaded into the
mobile phone and are played repeatedly while the user performs
other tasks. As each note of the music plays, vibrators on
each finger in the gloves activate, indicating which finger is
used to play each note. We present two studies on the efficacy
of MMT. The first measures 16 subjects' ability to play a
passage after using MMT for 30 minutes while performing a
reading comprehension test. The MMT system was significantly
more effective than a control condition where the passage was
played repeatedly but the subjects' fingers were not
vibrated. The second study compares the amount of time
required for 10 subjects to replay short, randomly generated
passages using passive training versus active
training. Participants with no piano experience could repeat
the passages after passive training while subjects with piano
experience often could not. For more information check the
CHI2010 or ISWC2010 paper.
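The note-to-vibration scheme can be sketched as a simple loop: as each note of the passage sounds, drive the vibrator on the finger that plays it. The fingering table and the `pulse` callback below are hypothetical stand-ins for the passage annotations and the glove's Bluetooth command, not MMT's actual interface.

```python
import time

# Hypothetical fingering for a five-note passage: MIDI note -> finger index
# (0 = thumb ... 4 = pinky). Real passages would carry fingering annotations.
FINGERING = {60: 0, 62: 1, 64: 2, 65: 3, 67: 4}

def finger_for(note):
    """Return which finger's vibrator to drive for a given MIDI note."""
    return FINGERING[note]

def play_passage(notes, pulse, beat=0.5):
    """Replay a passage: as each note plays, pulse the vibrator on the
    finger that plays it. `pulse(finger_index)` stands in for the glove's
    Bluetooth command (hypothetical interface)."""
    for note in notes:
        pulse(finger_for(note))
        time.sleep(beat)
```

Repeating this loop in the background while the user does something else is what makes the training "passive": the finger-to-note mapping is rehearsed haptically without the user's attention.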