Failed when processing a large file (13MB) #174

Open
chivychao opened this issue Oct 23, 2024 · 5 comments

@chivychao

When processing a large source code file, an error occurred approximately 2 hours into the process.

The error message is as follows:

Is this purely a network transmission issue? Is there a way to retry from the breakpoint?

Processing: 0%
file:///mnt/c/Users/zhaozhiwei/AppData/Roaming/npm/node_modules/humanifyjs/dist/index.mjs:56815
const result = (_a = response.choices[0].message) == null ? void 0 : _a.content;
^

TypeError: Cannot read properties of null (reading '0')
at file:///mnt/c/Users/zhaozhiwei/AppData/Roaming/npm/node_modules/humanifyjs/dist/index.mjs:56815:46
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async visitAllIdentifiers (file:///mnt/c/Users/zhaozhiwei/AppData/Roaming/npm/node_modules/humanifyjs/dist/index.mjs:56696:21)
at async file:///mnt/c/Users/zhaozhiwei/AppData/Roaming/npm/node_modules/humanifyjs/dist/index.mjs:56804:12
at async unminify (file:///mnt/c/Users/zhaozhiwei/AppData/Roaming/npm/node_modules/humanifyjs/dist/index.mjs:206:27)
at async Command. (file:///mnt/c/Users/zhaozhiwei/AppData/Roaming/npm/node_modules/humanifyjs/dist/index.mjs:56876:3)

Node.js v20.18.0

@0xdevalias

Is there a way to retry from the breakpoint?

Just to tie together a bunch of the potentially related issues:

This seems to be the section of code for implementing better throttling/retry logic (at least for the openai plugin):

Also:

Resume-ability would also be a good thing to consider.

Some of the discussion in the following issue could tangentially relate to resumability (specifically if a consistent 'map' of renames was created, perhaps that could also show which sections of the code hadn't yet been processed):

Originally posted by @0xdevalias in #167 (comment)

And there is some hacky workaround implementation of throttling code in this comment:
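As a rough illustration of the retry side of this (not humanify's actual implementation; `withRetry` and its parameters are hypothetical names), a generic exponential-backoff wrapper could look something like:

```javascript
// Hypothetical sketch: retry a flaky async call with exponential backoff.
// None of these names come from humanify; they are illustrative only.
async function withRetry(fn, { retries = 3, baseDelayMs = 1000 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      if (attempt === retries) break;
      // Exponential backoff: baseDelayMs, 2x, 4x, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

The rename call could then be wrapped as `await withRetry(() => client.chat.completions.create(...))`, though a real integration would also want to distinguish retryable errors (rate limits, transient network failures) from permanent ones.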

@0xdevalias

0xdevalias commented Oct 23, 2024

The error message is as follows:

Is this purely a network transmission issue? Is there a way to retry from the breakpoint?

Processing: 0%
file:///mnt/c/Users/zhaozhiwei/AppData/Roaming/npm/node_modules/humanifyjs/dist/index.mjs:56815
const result = (_a = response.choices[0].message) == null ? void 0 : _a.content;
^
TypeError: Cannot read properties of null (reading '0')

While I'm not 100% sure, I can trigger the same error message in a contrived way with:

const response = { choices: null }

response.choices[0].message

// VM2853:3 Uncaught TypeError: Cannot read properties of null (reading '0')
//    at <anonymous>:3:17

Which suggests that the response.choices is null.

Based on the code, this seems to be using the OpenAI rename plugin, with this bit in particular hitting the error:

const response = await client.chat.completions.create(
  toRenamePrompt(name, surroundingCode, model)
);
const result = response.choices[0].message?.content;
if (!result) {
  throw new Error("Failed to rename", { cause: response });
}

You would probably need to debug log/inspect the full value of response to see exactly what was returned to determine what the best fix here is; but I suspect the root cause is going to be the OpenAI API returning a response in an unexpected format, and the humanify code not being robust enough to account for that.

A very quick/dirty hacky workaround would be something like this:

        const response = await client.chat.completions.create(
          toRenamePrompt(name, surroundingCode, model)
        );
-       const result = response.choices[0].message?.content;
+       const result = response?.choices?.[0]?.message?.content;
        if (!result) {
          throw new Error("Failed to rename", { cause: response });
        }

@chivychao Does this happen consistently for you, or was it a once off?

@chivychao

Does this happen consistently for you, or was it a once off?

Currently, the error has only occurred once.

The response corresponding to the error is as follows:

{
  id: '',
  object: '',
  created: 0,
  model: '',
  choices: null,
  usage: {
    prompt_tokens: 0,
    completion_tokens: 0,
    total_tokens: 0,
    prompt_tokens_details: { cached_tokens: 0 },
    completion_tokens_details: { reasoning_tokens: 0 }
  },
  system_fingerprint: null
}

The response for other cases where there was no error is as follows:

{
  id: 'chatcmpl-AL9kKflYTv0estPl9u5g0tpSyGqsl',
  object: 'chat.completion',
  created: 1729605716,
  model: 'gpt-4o-mini-2024-07-18',
  choices: [
    {
      index: 0,
      message: [Object],
      logprobs: null,
      finish_reason: 'stop'
    }
  ],
  usage: {
    prompt_tokens: 168,
    completion_tokens: 7,
    total_tokens: 175,
    prompt_tokens_details: { cached_tokens: 0 },
    completion_tokens_details: { reasoning_tokens: 0 }
  },
  system_fingerprint: 'fp_482c22a7bc'
}
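Given those two shapes, a defensive check (purely illustrative; `isUsableCompletion` is not a humanify function) could distinguish them before any indexing happens:

```javascript
// Illustrative guard: true only when the response actually contains
// at least one choice to read a message from.
function isUsableCompletion(response) {
  return Array.isArray(response?.choices) && response.choices.length > 0;
}

// The empty response from the failing request (abridged from above):
const badResponse = { id: '', object: '', created: 0, model: '', choices: null };

// A normal successful response (abridged from above):
const goodResponse = {
  id: 'chatcmpl-AL9kKflYTv0estPl9u5g0tpSyGqsl',
  object: 'chat.completion',
  choices: [{ index: 0, message: { content: 'renamed' }, finish_reason: 'stop' }]
};
```

A caller could then retry the request (or fail loudly with the raw response attached) whenever the guard returns false, instead of crashing on the null index.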

@brianjenkins94

And there is some hacky workaround implementation of throttling code in this comment:

lol brutal 😆

It works just fine!

@0xdevalias

0xdevalias commented Oct 24, 2024

lol brutal 😆

More just meant it looks like a quick workaround, not a fully polished/configurable integrated feature; didn't intend any disrespect by it.
