SIGSEGV from spawn child_process in AWS Lambda function

I'm trying to spawn a synchronous child process (to run ffprobe) in an AWS Lambda function, but it dies almost instantly (200ms) with signal SIGSEGV.

My understanding of a segmentation fault is that it occurs when a process tries to access memory it isn't allowed to access. I tried increasing the memory to 1024MB (I was using 128MB, as each execution only uses about 56MB), but this didn't change anything.

I'm aware I'm not the only person who has had this issue: https://forums.aws.amazon.com/thread.jspa?threadID=229397

Anyone know how to resolve this?

Update 25/4/2016

For clarity, the code I am running is:

import { spawnSync } from 'child_process';

exports.handler = (event, context) => {
  process.env.PATH = `${process.env.PATH}:${process.env.LAMBDA_TASK_ROOT}`;
  const ffprobe = './ffprobe';

  const bucket = event.Records[0].s3.bucket.name;
  const key = event.Records[0].s3.object.key;
  console.log(`bucket: ${bucket}`);
  console.log(`key: ${key}`);

  const url = 'http://my-clip-url.com'; // An s3 presigned url.
  if (!url) {
    throw new Error('Clip does not exist.');
  }

  const command = `-show_format -show_streams -print_format json ${url}`;

  try {
    const child = spawnSync(ffprobe, command.split(' '));
    console.log(`stdout: ${child.stdout.toString()}`);
    console.log(`stderr: ${child.stderr.toString()}`);
    console.log(`status: ${child.status.toString()}`);
    console.log(`signal: ${child.signal.toString()}`);
  } catch (exception) {
    console.log(`Process crashed! Error: ${exception}`);
  }
};

The output of which is:

START RequestId: 6d72847 Version: $LATEST

2016-04-25T19:32:26.154Z    6d72847 stdout: 
2016-04-25T19:32:26.155Z    6d72847 stderr: 
2016-04-25T19:32:26.155Z    6d72847 status: 0
2016-04-25T19:32:26.155Z    6d72847 signal: SIGSEGV
END RequestId: 6d72847
REPORT RequestId: 6d72847   Duration: 4151.10 ms    Billed Duration: 4200 ms    Memory Size: 256 MB Max Memory Used: 84 MB  

I am using the Serverless framework to babelify and deploy my code.

NOTE: I have tried running this binary on an ami-bff32ccc instance on EC2 (http://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html) and it works. So it must be something I'm doing (how I'm executing ffprobe).

asked Apr 20 '16 by Chris Paton


2 Answers

Try this. Have your Lambda function spawn a bash shell that does this:

ulimit -c unlimited
cd /tmp
$LAMBDA_TASK_ROOT/ffprobe ...

Then check for a file named "/tmp/core", and if it exists, copy it to an S3 bucket (or wherever), and use gdb to analyse it on your development system or an EC2 host. I haven't verified this myself, but I do know that by default ulimit will be zero and core files will be dumped to the current directory. Note that these details are subject to change without notice (and, if memory serves, they have changed recently).

Of course, the "cd" could happen in the Lambda function. If Node.js provides a way to set ulimit, that could happen there too.

[Edit: the correct pattern is /tmp/core.%e.%p, see "man core" to interpret.]
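
I haven't run this end to end, but a minimal sketch of that flow, written in the same style as the question's handler, might look like the following. CLIP_URL and OUTPUT_BUCKET are placeholders for the real presigned URL and any bucket the function can write to, and it assumes the bundled aws-sdk and a core file named core or core.%e.%p:

import { execSync } from 'child_process';
import fs from 'fs';
import AWS from 'aws-sdk';

const s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
  // Run ffprobe through bash so the core file size limit can be raised first.
  // Core files land in the current working directory, so cd into /tmp, the
  // only writable path in a Lambda container.
  const script = 'ulimit -c unlimited; cd /tmp; "$LAMBDA_TASK_ROOT"/ffprobe -show_format -show_streams -print_format json "$CLIP_URL"';

  try {
    execSync(`bash -c '${script}'`, {
      env: Object.assign({}, process.env, { CLIP_URL: 'http://my-clip-url.com' }), // placeholder URL
    });
  } catch (err) {
    console.log(`ffprobe failed: ${err}`);
  }

  // If ffprobe segfaulted, a file matching core or core.%e.%p should now exist.
  const core = fs.readdirSync('/tmp').find(name => name.startsWith('core'));
  if (!core) {
    return callback(null, 'no core dump produced');
  }

  // OUTPUT_BUCKET is a placeholder for a bucket the function can write to.
  s3.putObject({
    Bucket: process.env.OUTPUT_BUCKET,
    Key: `cores/${core}`,
    Body: fs.readFileSync(`/tmp/${core}`),
  }, err => callback(err, `uploaded ${core} for analysis with gdb`));
};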

answered by Jeff Learman


The version of ffprobe I was using came from John Van Sickle's site; while it worked when I ran it on Amazon Linux EC2 instances, it wouldn't run on AWS Lambda.

Following Jeff Learman's advice, I built my own version using this wonderful script, on the current version of the environment used by AWS Lambda as described here. I then deployed it alongside my Lambda function and it worked first time! :)
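
In case it helps anyone debugging the same thing, a quick sanity check (just a sketch, assuming the statically built ffprobe is bundled at the task root) is to spawn it with -version before doing any real work; if even that dies with SIGSEGV, the binary itself is the problem rather than the arguments:

import { spawnSync } from 'child_process';
import path from 'path';

exports.handler = (event, context) => {
  // LAMBDA_TASK_ROOT points at the unpacked deployment package,
  // which is where the statically built ffprobe is bundled.
  const ffprobe = path.join(process.env.LAMBDA_TASK_ROOT, 'ffprobe');

  const check = spawnSync(ffprobe, ['-version']);
  console.log(`status: ${check.status}, signal: ${check.signal}`);
  console.log(check.stdout ? check.stdout.toString() : '');
  // A SIGSEGV here means the binary is incompatible with the Lambda
  // execution environment, independent of any arguments passed to it.
};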

answered by Chris Paton