How does Twitter extract meaningful subject colors from image pixel data?

Let me first clarify the problem statement. Check out this tweet:

https://twitter.com/jungledragon/status/926894337761345538

Next, click the image itself within the tweet. In the lightbox that appears, the menu bar below it takes on a meaningful color based on the actual pixels in the image. Even in this stress test, a difficult image given all the light pixels, it does a fine job of picking an overall color that 1) represents the content of the image and 2) is dark/contrasty enough to place white text on it:

[Screenshot: the tweet's lightbox, with the menu bar colored to match the image]

I was already implementing a similar system before I even knew Twitter had this feature. Check out a preview below:

[Screenshot: a preview of my own implementation]

The examples in the screenshot are optimistic; there are plenty of situations where the background comes out too light. Even in the seemingly positive examples shown, the result does not pass the AA or AAA contrast check most of the time.

My current approach:

  • Once per image, a JavaScript routine runs that calculates the average color of all pixels in the image (see the sketch after this list). Note that the average color is not necessarily a meaningful color; in the edge case of the spider, for example, the average would be close to white.
  • I store the RGB value in the database
  • Upon rendering the page (server-side) I dynamically set the background color of the image's caption using a formula
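
To make the first step concrete, here is a minimal sketch of the kind of one-off averaging pass I run client-side. The function name and the full-resolution getImageData call are my own choices, not a definitive implementation, and it assumes the image is same-origin (or CORS-enabled) so the canvas pixel data is readable.

// Minimal sketch: compute the average RGB of every pixel in an <img>.
function averageColor(img) {
    var canvas = document.createElement('canvas');
    var context = canvas.getContext('2d');

    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    context.drawImage(img, 0, 0);

    var data = context.getImageData(0, 0, canvas.width, canvas.height).data;
    var r = 0, g = 0, b = 0, count = data.length / 4;

    for (var i = 0; i < data.length; i += 4) {
        r += data[i];
        g += data[i + 1];
        b += data[i + 2];
    }

    return { r: Math.round(r / count), g: Math.round(g / count), b: Math.round(b / count) };
}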

My formula is to convert the RGB to HSL and then manipulate the S and L values in particular: give each of them a nudge, using min/max values to set a threshold. I've tried countless combinations.
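
Roughly, the adjustment I'm describing looks like this. It is only a minimal sketch: rgbToHsl and hslToRgb are placeholder names for ordinary conversion helpers, and the thresholds are just example values, not ones I'm claiming work universally.

// Sketch of the caption-color adjustment: keep the hue, nudge S and L.
// rgbToHsl()/hslToRgb() are hypothetical conversion helpers (S and L in 0..1).
function adjustCaptionColor(r, g, b) {
    var hsl = rgbToHsl(r, g, b);

    // Push saturation into a middle band so the color stays recognizable,
    // and cap lightness so white text remains readable on top of it.
    hsl.s = Math.min(Math.max(hsl.s, 0.3), 0.8);
    hsl.l = Math.min(hsl.l, 0.25);

    return hslToRgb(hsl.h, hsl.s, hsl.l);
}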

Yet it seems like a never-ending struggle because color darkness and contrast are subject to human perception.

Hence my curiosity on how Twitter seems to have nailed this, in particular two aspects:

  1. Finding a meaningful subject color (not the same as average or dominant color)
  2. Toning that meaningful color so that it remains recognizable (hue) yet is contrasty enough to place light text on it, whilst passing at least the AA contrast check (see the contrast-check sketch below).
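
For reference, the AA check I keep failing against is the WCAG contrast ratio, which can be computed directly from two colors. Here is a minimal sketch of the standard formula with my own helper names; AA requires a ratio of at least 4.5:1 for normal-sized text.

// WCAG relative luminance of an sRGB color (channel values 0-255).
function relativeLuminance(r, g, b) {
    var channels = [r, g, b].map(function (c) {
        c /= 255;
        return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    });
    return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
}

// Contrast ratio between two colors, e.g. white text on a computed background:
// contrastRatio([255, 255, 255], [64, 54, 47]) is about 11.7, which passes AA.
function contrastRatio(rgb1, rgb2) {
    var l1 = relativeLuminance(rgb1[0], rgb1[1], rgb1[2]);
    var l2 = relativeLuminance(rgb2[0], rgb2[1], rgb2[2]);
    return (Math.max(l1, l2) + 0.05) / (Math.min(l1, l2) + 0.05);
}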

I've searched around, but cannot find any information on their implementation. Is anybody aware of how they do it? Or of other proven methods to solve this puzzle end-to-end?

Fer asked Nov 11 '17

1 Answer

I took a peek at Twitter's markup to see what I could find, and, after running a bit of code in the browser's console, it seems like Twitter takes a color average over a flat distribution of pixels in the images and scales each of the RGB channels to values of 64 and below. This provides a pretty fast way to create a high-contrast background for light text while still retaining a reasonable color match. From what I can tell, Twitter doesn't perform any advanced subject-color-detection, but I can't say for sure.

Here's a quick-and-dirty demo I made to validate this theory. The top and left borders around the images initially display the color Twitter uses. After running the snippet, a bottom and right border appears with the calculated color. Requires IE 9+ for IE users.

function processImage(img)
{
    var imageCanvas = new ImageCanvas(img);
    var tally = new PixelTally();

    // Sample the image on a uniform grid, one pixel every config.interval pixels
    for (var y = 0; y < imageCanvas.height; y += config.interval) {
        for (var x = 0; x < imageCanvas.width; x += config.interval) {
            tally.record(imageCanvas.getPixelColor(x, y));
        }
    }

    var average = new ColorAverage(tally);

    img.style.borderRightColor = average.toRGBStyleString();
    img.style.borderBottomColor = average.toRGBStyleString();
}

function ImageCanvas(img)
{
    var canvas = document.createElement('canvas');

    this.context2d = canvas.getContext('2d');
    this.width = canvas.width = img.naturalWidth;
    this.height = canvas.height = img.naturalHeight;

    this.context2d.drawImage(img, 0, 0, this.width, this.height);

    this.getPixelColor = function (x, y) {
        var pixel = this.context2d.getImageData(x, y, 1, 1).data;

        return { red: pixel[0], green: pixel[1], blue: pixel[2] };
    }
}

function PixelTally()
{
    this.totalPixelCount = 0;
    this.colorPixelCount = 0;
    this.red = 0;
    this.green = 0;
    this.blue = 0;
    this.luminosity = 0;

    // Count every sampled pixel toward the luminosity average, but exclude
    // near-greyscale pixels from the color sums below
    this.record = function (colors) {
        this.luminosity += this.calculateLuminosity(colors);
        this.totalPixelCount++;

        if (this.isGreyscale(colors)) {
            return;
        }

        this.red += colors.red;
        this.green += colors.green;
        this.blue += colors.blue;

        this.colorPixelCount++;
    };

    this.getAverage = function (colorName) {
        return this[colorName] / this.colorPixelCount;
    };

    this.getLuminosityAverage = function () {
        return this.luminosity / this.totalPixelCount;
    }

    this.getNormalizingDenominator = function () {
        return Math.max(this.red, this.green, this.blue) / this.colorPixelCount;
    };

    this.calculateLuminosity = function (colors) {
        return (colors.red + colors.green + colors.blue) / 3;
    };

    this.isGreyscale = function (colors) {
        return Math.abs(colors.red - colors.green) < config.greyscaleDistance
            && Math.abs(colors.red - colors.blue) < config.greyscaleDistance;
    };
}

function ColorAverage(tally)
{
    var lightness = config.lightness;
    var normal = tally.getNormalizingDenominator();
    var luminosityAverage = tally.getLuminosityAverage();

    // We won't scale the channels up to 64 for darker images:
    if (luminosityAverage < lightness) {
        lightness = luminosityAverage;
    }

    // Scale the averaged channels so the largest one lands at `lightness` (64 by default)
    this.red = (tally.getAverage('red') / normal) * lightness;
    this.green = (tally.getAverage('green') / normal) * lightness;
    this.blue = (tally.getAverage('blue') / normal) * lightness;

    this.toRGBStyleString = function () {
        return 'rgb('
            + Math.round(this.red) + ','
            + Math.round(this.green) + ','
            + Math.round(this.blue) + ')';
    };
}

function Configuration()
{
    this.lightness = 64;
    this.interval = 100;
    this.greyscaleDistance = 15;
}

var config = new Configuration();
var indicator = document.getElementById('indicator');

document.addEventListener('DOMContentLoaded', function () {
    document.forms[0].addEventListener('submit', function (event) {
        event.preventDefault();

        config.lightness = Number(this.elements['lightness'].value);
        config.interval = Number(this.elements['interval'].value);
        config.greyscaleDistance = Number(this.elements['greyscale'].value);

        indicator.style.visibility = 'visible';

        setTimeout(function () {
            processImage(document.getElementById('image1'));
            processImage(document.getElementById('image2'));
            processImage(document.getElementById('image3'));
            processImage(document.getElementById('image4'));
            processImage(document.getElementById('image5'));

            indicator.style.visibility = 'hidden';
        }, 50);
    });
});
label { display: block; }
img { border-width: 20px; border-style: solid; width: 200px; height: 200px; }
#image1 { border-color: rgb(64, 54, 47) white white rgb(64, 54, 47); }
#image2 { border-color: rgb(46, 64, 17) white white rgb(46, 64, 17); }
#image3 { border-color: rgb(64, 59, 46) white white rgb(64, 59, 46); }
#image4 { border-color: rgb(36, 38, 20) white white rgb(36, 38, 20); }
#image5 { border-color: rgb(45, 53, 64) white white rgb(45, 53, 64); }
#indicator { visibility: hidden; }
<form id="configuration_form">
    <p>
        <label>Lightness:
            <input name="lightness" type="number" min="1" max="255" value="64">
        </label>
        <label>Pixel Sample Interval:
            <input name="interval" type="number" min="1" max="255" value="100">
            (Lower values are slower)
        </label>
        <label>Greyscale Distance:
            <input name="greyscale" type="number" min="1" max="255" value="15">
        </label>
        <button type="submit">Run</button> (Wait for images to load first!)
    </p>
    <p id="indicator">Running...this may take a few moments.</p>
</form>

<p>
    <img id="image1" crossorigin="Anonymous" src="https://pbs.twimg.com/media/DNz9fNqWAAAtoGu.jpg:large">
    <img id="image2" crossorigin="Anonymous" src="https://pbs.twimg.com/media/DOdX8AGXUAAYYmq.jpg:large">
    <img id="image3" crossorigin="Anonymous" src="https://pbs.twimg.com/media/DOYp0HQX4AEWcnI.jpg:large">
    <img id="image4" crossorigin="Anonymous" src="https://pbs.twimg.com/media/DOQm1NzXkAEwxG7.jpg:large">
    <img id="image5" crossorigin="Anonymous" src="https://pbs.twimg.com/media/DN6gVnpXUAIxlxw.jpg:large">
</p>

The code ignores white, black, and grey-ish pixels when determining the dominant color from the image, which gives us a more vivid saturation even though we reduce the brightness of the color. The computed color is pretty close to the original color from Twitter for most of the images.

We can improve this experiment by changing which parts of the image we calculate the average color from. The example above selects pixels uniformly across the whole image, but we can try using only pixels near the edges of the image—so the color blends more seamlessly—or we can try averaging color values from the center of the image to highlight the subject. I'll expand on the code and update this answer later when I have some more time.
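
For example, restricting the sampling loops to the middle of the image could look something like the following. This is only a hypothetical variation on processImage above (the 25%-75% window is an arbitrary choice), not something I've verified against Twitter's actual colors.

// Hypothetical variation: sample only the central 50% of the image in each
// dimension to bias the average toward the subject rather than the background.
function processImageCenter(img)
{
    var imageCanvas = new ImageCanvas(img);
    var tally = new PixelTally();

    var startX = Math.floor(imageCanvas.width * 0.25);
    var endX = Math.ceil(imageCanvas.width * 0.75);
    var startY = Math.floor(imageCanvas.height * 0.25);
    var endY = Math.ceil(imageCanvas.height * 0.75);

    for (var y = startY; y < endY; y += config.interval) {
        for (var x = startX; x < endX; x += config.interval) {
            tally.record(imageCanvas.getPixelColor(x, y));
        }
    }

    var average = new ColorAverage(tally);

    img.style.borderRightColor = average.toRGBStyleString();
    img.style.borderBottomColor = average.toRGBStyleString();
}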

Cy Rossignol answered Oct 20 '22