TL;DR - I created an application that analyzes the lyrics in your Spotify playlists using IBM Watson Tone Analytics. Play with the app here. Look at the code here. Run it yourself locally or deploy it by following these instructions.

Before we get in too deep, you might want to read this other post… consider it a “part 1.” It explains setting up the auth flow with a Node + React + React-Router + Redux app. This post will only discuss integrating Watson Tone Analytics into an already authenticated application.

We’re going to be digging through an app that digs through your music. The app fetches song lyrics, sends them off to some fun-machine-learning-tone-analytics-algorithm, and then visualizes the data in the UI. Tone analytics is a service that was trained on a bunch of Twitter data to determine if text is sad, angry, joyful, etc. We’ll start with a high-level overview of what’s going on, and then discuss key code snippets that power the more crucial parts of the application. Again, the app is here, and the code is here if you want to play around with either before finishing this post.

…why do this at all?

This is a question I ask myself often when throwing together tiny apps like this one. Honestly, the motivation behind this doesn’t stem much beyond it being fun. Music is cool and Spotify is rad and machine learning can be funny so why not throw them all together?

Just check out these results:


…pretty fun, right?

To use the app, simply log in and click on a playlist! That’s it! Watson gives us percentiles for the emotions found in Pixar’s Inside Out - Joy, Sadness, Anger, Disgust, and Fear. Once all your results are in, you’ll be able to sort by a metric by clicking on its header.



In this app, our Node server handles the Spotify Authentication Workflow, as well as gathering lyric information and caching it in Cloudant. Once our client is authenticated, it uses its access token to query Spotify directly to load the playlist and track information. When the user loads a playlist, the client fires off requests to the Node server, which gathers the lyrics and then shoots them off to Watson for some tone analytics.

Getting the Lyrics

This turned out to be one of the trickiest parts of this whole operation; it’s shockingly difficult to acquire song lyrics… legally. I ended up using Musixmatch’s “free tier.” With their API key you can request 500 lyrics per day and obtain a whopping 30% of each song. Their terms of service also allow you to cache lyrics, so that’s a big plus.

Our flow then, as detailed in the architecture diagram above, is:

  1. See if we already have the lyrics in our Cloudant cache
  2. If it’s there… great! Use it!
  3. If not, fetch the lyrics from Musixmatch, return those and put the results in our cache

The code for this looks like (from server/routes.js):

function getLyrics(track, artist, album) {
  return cloudant.get(track, artist, album).then(body => {
    return body.lyrics;
  }, e => {
    if (e.error === 'not_found') {
      return matchSong(track, artist, album)
        .then(id => getLyricsFromMusixMatch(id))
        .then(lyrics => {
          // cache the lyrics for next time (fire-and-forget)
          cloudant.insert(track, artist, album, lyrics);
          return lyrics;
        });
    } else {
      throw e;
    }
  });
}

Now this does mean that we’re performing the tone analytics on only the first 30% of each song, but hey, at least we’re doing it legally.

Getting lyrics from Musixmatch is as simple as hitting their /matcher.track.get endpoint to get the song id, and then hitting /track.lyrics.get to get the lyrics, so I’m not going to go into that here.
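For the curious, a rough sketch of those two requests might look like the following. The endpoint and parameter names come from Musixmatch’s public API docs; the helper names and the `apikey` variable are made up for illustration and aren’t from the app’s code:

```javascript
// Hypothetical sketch of the two Musixmatch requests. Endpoint and query
// parameter names follow Musixmatch's public API; everything else here
// (helper names, the apikey placeholder) is invented for illustration.
const BASE = 'https://api.musixmatch.com/ws/1.1';

function matchTrackUrl(track, artist, apikey) {
  const qs = new URLSearchParams({ q_track: track, q_artist: artist, apikey });
  return `${BASE}/matcher.track.get?${qs}`;
}

function trackLyricsUrl(trackId, apikey) {
  const qs = new URLSearchParams({ track_id: trackId, apikey });
  return `${BASE}/track.lyrics.get?${qs}`;
}

// Usage: fetch matchTrackUrl(...) first to resolve the track id,
// then fetch trackLyricsUrl(id, apikey) to pull the (30% of) lyrics.
```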

Getting the Tone

I go a little more in depth into using the Watson Tone Analytics API in my Ambient Sentiment post, so be sure to check that out. NPM makes our life a little easier here, and our asynchronous tone-fetching function looks like (from server/routes.js):

const watson = require('watson-developer-cloud');
const toneAnalyzer = watson.tone_analyzer(opts);

function toneAsync(text) {
  return new Promise((resolve, reject) => {
    toneAnalyzer.tone({ text }, (e, tone) => {
      if (e) {
        reject(e);
      } else {
        resolve(tone);
      }
    });
  });
}

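For reference, the payload toneAsync resolves with (and that the client later reads via body.document_tone.tone_categories) is shaped roughly like this. This is an abbreviated, hand-written sample based on the Tone Analyzer v3 response format; the scores are invented:

```javascript
// Abbreviated, hand-written sample of a Tone Analyzer v3 response.
// Shape follows the v3 service docs; scores are made up for illustration.
const sample = {
  document_tone: {
    tone_categories: [
      {
        category_id: 'emotion_tone',
        category_name: 'Emotion Tone',
        tones: [
          { score: 0.72, tone_id: 'joy', tone_name: 'Joy' },
          { score: 0.11, tone_id: 'sadness', tone_name: 'Sadness' },
          { score: 0.05, tone_id: 'anger', tone_name: 'Anger' },
        ],
      },
    ],
  },
};

// The UI only really cares about the emotion category's scores:
const emotions = sample.document_tone.tone_categories
  .find(c => c.category_id === 'emotion_tone')
  .tones.map(t => `${t.tone_name}: ${t.score}`);

console.log(emotions); // [ 'Joy: 0.72', 'Sadness: 0.11', 'Anger: 0.05' ]
```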
Putting it All Together

All our endpoint needs to do is get the lyrics and then feed the lyrics to Watson (he’s a growing boy and needs his veggies) (from server/routes.js):

router.get('/tone', (req, res) => {
  const { track, artist, album } = req.query;
  getLyrics(track, artist, album).then(lyrics => {
    return toneAsync(lyrics);
  }).then(tone => {
    res.json(tone);
  }).catch(e => {
    res.status(500).json(e);
  });
});

Client Code

The only thing I want to call out client-code-wise, apart from what I already blogged about in “part 1: authentication town,” is how the client loads a single playlist and gets each track’s tone information.

What’s happening here is we first issue the getPlaylist call from our spotifyApi module.

A quick tangent: our spotifyApi friend is initialized via:

import Spotify from 'spotify-web-api-js';
const spotifyApi = new Spotify();

And then, as detailed in my other post I keep bringing up, the access token gets set during our authentication workflow:


Ok, so, once we have the tracks from spotifyApi.getPlaylist(), we make requests for each track’s tone information. We do this by hitting the /tone endpoint with the track’s name, album, and artist info (from client/actions/actions.js):

export function loadPlaylist(userID, playlistID) {
  return dispatch => {
    dispatch({ type: TRACK_LIST_BEGIN });
    spotifyApi.getPlaylist(userID, playlistID).then(data => {
      dispatch({ type: TRACK_LIST_SUCCESS, data });
      data.tracks.items.forEach(i => {
        const t = i.track;
        asyncGet('/tone', {
          track: t.name,
          album: t.album.name,
          artist: t.artists.map(a => a.name).join(', '),
        }).then(({ body }) => dispatch({
          type: TRACK_LIST_TONE,
          tone: body.document_tone.tone_categories,
          playlistID,
          id: t.id,
        })).catch(error => {
          dispatch({ type: TRACK_LIST_TONE_ERROR, error, playlistID, id: t.id });
        });
      });
    }).catch(error => {
      dispatch({ type: TRACK_LIST_FAILURE, error });
    });
  };
}

Crumbling Cookies

There we have it, another nice, sweet, and deliciously tasty mini-app. In case you skipped all the way to the end (because it’s 2016 and who reads instead of skims), here are some fun links: