CONTEXTuAALS | Context-Aware Audio Labeling
GOAL
Designing a framework for context-informed sound labeling based on audio-visual recognition matching, to produce more semantic, human-driven classifications.
ABSTRACT
The way humans perceive sounds and attribute them to objects differs from how we train machines to classify them. How can we bridge the divide between these two understandings to create more meaningful machine learning design solutions?
Augmented Humans
Fall 2022
duration
6 weeks
tools
Ubicoustics
YOLOv3
AfterEffects
Figma
areas of research
human augmentation, selective sensing, sound localization
More documentation coming soon :-)