
Customising a font based on your own voice?


Yes, you read that right.
I was browsing Behance a few days ago and stumbled upon this gem of a project by Ogilvy New York called TypeVoice.





Basically, it allows users to create their own typeface using their own voice.

"Because we use different parameters, you get a number of results when you interact with our experience. Yell, laugh, and whistle into TypeVoice and you'll find yourself surprised." Ogilvy New York's creative director, Chris Rowson told Co.Create.



This project was made to celebrate the 20th anniversary of The Webby Awards in 2016. The Webby Awards is the leading international award honoring excellence on the Internet, covering Websites, Advertising & Media, Online Film & Video, Mobile Sites & Apps, and Social. It is presented by The International Academy of Digital Arts and Sciences, which consists of over two thousand industry experts and technology innovators.



With the mentality of 'championing the voice of the people', creative directors Chris Rowson and Bastien Baumann, along with their team, came up with this project, which gives literal meaning to the phrase The Webby Awards believes in.



Basically, they created an algorithm that measures the user's pitch, volume, and elapsed time, allowing each letter to react in real time (using two independent SVG animation timelines).

The website itself uses GSAP to create two independent timelines, or axes of motion: one representing pitch and the other volume. The site was built in vanilla ES6 JavaScript, uses GSAP and CSS animations for the SVG images, and everything weighs only 1.5 MB.
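To make that idea concrete, here is a minimal sketch of how two independent GSAP timelines could act as those axes of motion. This is my own illustration rather than the actual TypeVoice code; the ".letter" selector, the specific tweens, and the 0 to 1 normalisation are assumptions.

```typescript
// Minimal sketch (not the actual TypeVoice source): two paused GSAP
// timelines are scrubbed independently, one by pitch and one by volume.
import { gsap } from "gsap";

// Axis 1: pitch stretches each SVG letter vertically (assumed tween).
const pitchAxis = gsap.timeline({ paused: true })
  .to(".letter", { scaleY: 2.5, transformOrigin: "50% 100%", duration: 1 });

// Axis 2: volume thickens the letter strokes (assumed tween).
const volumeAxis = gsap.timeline({ paused: true })
  .to(".letter", { strokeWidth: 14, duration: 1 });

// Scrub both axes from normalised (0..1) pitch and volume readings.
export function applyVoice(pitch01: number, volume01: number): void {
  pitchAxis.progress(gsap.utils.clamp(0, 1, pitch01));
  volumeAxis.progress(gsap.utils.clamp(0, 1, volume01));
}
```

Because each reading only scrubs a timeline's progress, the letterforms can respond on every animation frame without rebuilding any animation.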

Upon landing on TypeVoice.net, the site asks the user for permission to use their microphone. After allowing it, users can follow speaking prompts with random words like "yass" or "sausages". Once the typeface has been created, users can even customize each letter further by altering their speech patterns.
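Here is a rough sketch, again not the real implementation, of how the browser's microphone flow could feed those axes: getUserMedia triggers the permission prompt, and an AnalyserNode provides a volume reading (RMS) plus a very crude pitch estimate (the loudest FFT bin) every animation frame. The 4x volume gain and the 1000 Hz pitch ceiling are arbitrary assumptions.

```typescript
// Rough sketch of the microphone flow, not TypeVoice's actual code.
export async function startMicAnalysis(
  onFrame: (pitch01: number, volume01: number) => void
): Promise<void> {
  // Triggers the browser's microphone permission prompt.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const time = new Float32Array(analyser.fftSize);
  const freq = new Uint8Array(analyser.frequencyBinCount);

  const update = () => {
    // Volume: RMS of the time-domain signal.
    analyser.getFloatTimeDomainData(time);
    const rms = Math.sqrt(time.reduce((s, v) => s + v * v, 0) / time.length);

    // Pitch (very rough): frequency of the loudest FFT bin.
    analyser.getByteFrequencyData(freq);
    let peak = 1;
    for (let i = 1; i < freq.length; i++) if (freq[i] > freq[peak]) peak = i;
    const hz = (peak * ctx.sampleRate) / analyser.fftSize;

    // Normalise to 0..1 with assumed scaling, then hand off each frame.
    onFrame(Math.min(1, hz / 1000), Math.min(1, rms * 4));
    requestAnimationFrame(update);
  };
  update();
}

// Example usage: feed the readings into the GSAP sketch above.
// startMicAnalysis((pitch01, volume01) => applyVoice(pitch01, volume01));
```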

Here are some examples people have tried on the website.


When I first stumbled upon this project I was completely in awe. Maybe it's because seeing something interactive that still offers a unique experience to each user is new to me, and I find it very interesting. Another reason I was so interested is that I wanted to know more about how something, in this case a typeface, can be created in real time from different inputs (the volume and pitch of the user's voice). Figuring out GSAP and SVG animations might also help me with the research and development project I'm currently working on.

Unfortunately, the site is no longer accessible at the time of writing.
"The site has run its course throughout the campaign and is no longer live in our domain."
Justin Au, typography, motion, and user experience designer on the project.

GIFs and video via TypeVoice.
The technical details mentioned above are based on their Behance page.

