Using nock, is there a way to disable a single nock scope? I've been struggling with some tests that set up nocks for the same URL as some other tests. They both run fine separately, but when run in the same mocha session one of them fails, because I'm unable to re-nock the active nock scopes, meaning the nocks that were set up first catch all the requests.
What I've tried:

- Setting up the scope with `.persist()` in a `before()` and then calling `scope.persist(false)` in my `after()`. This only "unpersists" the scope, so it stays active for one more request; it is not disabled immediately.
- `nock.cleanAll()`. This immediately disables the nocks so they can be set up again, but it also removes any global nocks that were set up once and are common to all test cases.

So far, the only solutions I've found are to 1) use unique URLs for all nocks, which isn't always possible, or 2) use `nock.cleanAll()` and not rely on any global nocks; instead, only set up nocks in local `before()` functions, including setting up the global ones repeatedly for every test that needs them.
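A rough sketch of workaround 2 (`setupCommonNocks` is a hypothetical helper I've named for illustration, not a nock API):

```js
const nock = require('nock');

// Hypothetical helper: re-registers the nocks shared by all suites.
// It has to run in every suite's before(), because nock.cleanAll()
// in after() removes the shared nocks along with the local ones.
function setupCommonNocks() {
  nock('http://common').persist().get('/').reply(200, 'common');
}

describe('Foo tests', () => {
  before(() => {
    setupCommonNocks();
    nock('http://mocked').persist().get('/').reply(200, 'foo');
  });

  after(() => {
    nock.cleanAll(); // removes everything, hence setupCommonNocks() above
  });
});
```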
It seems it would be highly useful to be able to do

```js
scope = nock('http://somewhere.com').persist().get('/').reply(200, 'foo');
```

and then use that nock in a bunch of tests, and finally do

```js
scope.remove();
```

However, I've not been able to do anything like this. Is it possible?
Example:
```js
// Assuming node-fetch and chai here; adjust to your own setup.
const nock = require('nock');
const fetch = require('node-fetch');
const { expect } = require('chai');

before(async () => {
  nock('http://common').persist().get('/').reply(200, 'common');
});

after(async () => {
});

describe('Foo tests', () => {
  let scope;

  before(async () => {
    scope = nock('http://mocked').persist().get('/').reply(200, 'foo');
  });

  after(() => {
    // scope.persist(false); // This causes the Bar tests to use the Foo nocks one more time :(
    // nock.cleanAll(); // This also disables the common nocks
  });

  it('Should get FOO', async () => {
    expect(await fetch('http://mocked').then(res => res.text())).to.equal('foo');
    expect(await fetch('http://common').then(res => res.text())).to.equal('common');
  });

  it('Should get FOO again', async () => {
    expect(await fetch('http://mocked').then(res => res.text())).to.equal('foo');
    expect(await fetch('http://common').then(res => res.text())).to.equal('common');
  });
});

describe('Bar tests', () => {
  let scope;

  before(async () => {
    scope = nock('http://mocked').persist().get('/').reply(200, 'bar');
  });

  after(() => {
    // scope.persist(false);
    // nock.cleanAll();
  });

  it('Should get BAR', async () => {
    expect(await fetch('http://mocked').then(res => res.text())).to.equal('bar');
    expect(await fetch('http://common').then(res => res.text())).to.equal('common');
  });

  it('Should get BAR again', async () => {
    expect(await fetch('http://mocked').then(res => res.text())).to.equal('bar');
    expect(await fetch('http://common').then(res => res.text())).to.equal('common');
  });
});
```
These tests either fail the 3rd test when using `scope.persist(false)` (since that test still gets the foo version), or fail tests 3 and 4 when using `nock.cleanAll()`, since the common nocks are then removed.
I also had this issue and found a way to work around it by listening to the `request` event emitted by the scope and removing the interceptor when the event fires. Ideally, I think you should listen to the `replied` event, but for some reason that event wasn't firing when I tried it; I'm not sure why. The code below worked for me:
```js
/**
 * @jest-environment node
 */
const path = require('path');
const nock = require('nock');

describe('Test suite', () => {
  test('Test case', async () => {
    // Note: the scope option is `reqheaders` (all lowercase); a
    // `reqHeaders` key is ignored by nock, so the headers would
    // never actually be matched.
    let interceptor1 = nock('https://example-url.com', {
      reqheaders: {
        'Content-Type': 'text/xml',
        soapaction: 'http://www.sample.com/servie/getOrders',
      },
    })
      .post('/');

    let interceptor2 = nock('https://example-url.com', {
      reqheaders: {
        soapaction: 'http://www.sample.com/servie/getProducts',
      },
    })
      .post('/');

    let scope = interceptor1.replyWithFile(200, path.join(__dirname, './path1.xml'));
    interceptor2.replyWithFile(200, path.join(__dirname, './path.xml'));

    // Once the first request has been served, drop interceptor1 so it
    // cannot swallow subsequent requests.
    scope.on('request', (req, interceptor) => {
      nock.removeInterceptor(interceptor1);
    });

    // asynccall() and exp are placeholders for the code under test
    // and its expected result.
    const resp = await asynccall();
    expect(resp).toStrictEqual(exp);
  });
});
```
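For the original mocha example, another angle worth trying is nock's documented `nock.removeInterceptor()`: keep a reference to the interceptor itself (the object returned by `.get('/')`, before `.reply()` is chained, since `.reply()` returns the scope) and remove it in `after()`. A minimal sketch against the Foo/Bar setup above; I haven't verified it across every nock version:

```js
const nock = require('nock');

describe('Foo tests', () => {
  let interceptor;

  before(() => {
    // .get() returns the Interceptor; .reply() returns the Scope,
    // so grab the interceptor reference before chaining .reply().
    interceptor = nock('http://mocked').persist().get('/');
    interceptor.reply(200, 'foo');
  });

  after(() => {
    // Removes only this interceptor, immediately, leaving the common
    // nocks registered in the global before() untouched.
    nock.removeInterceptor(interceptor);
  });
});
```

`removeInterceptor()` also accepts an options object, e.g. `nock.removeInterceptor({ hostname: 'mocked', path: '/' })`, if you don't have the interceptor reference handy.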