Issue with MR rejection: slice: values out of range

What happens

My Merge Request (MR) was rejected due to a VM error in the MR test: slice: values out of range. I wrote the solution in the Lobster language. Strangely, both local compilation with “makes” and the pipeline succeeded.
I tested while writing the code by reading a copy of DATA.lst using:

(1) let data: string? = read_file("DATA.lst")  // whole file as one string, or nil on failure
(2) let newStr: string = string(data)          // convert string? to string
(3) tokenize(newStr, "\n", " ")                // one array element per line

Building this with lobster username.lobster was successful. However, replacing (3) with tokenize(get_line(""), "\n", " ") causes the slice function to fail.
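
To make the difference concrete, here is a minimal sketch (not the MR code; it assumes DATA.lst is also the file piped to standard input). read_file hands tokenize the whole file, while get_line hands it only a single line, so the resulting arrays differ in length:

// Whole file: one array element per line of DATA.lst
let whole: string? = read_file("DATA.lst")
if whole:
    let all_rows = tokenize(whole, "\n", " ")
    print(length(all_rows))                      // number of lines in the file

// Single line: get_line("") returns only the first line of stdin,
// so tokenize produces a one-element array
let one_row = tokenize(get_line(""), "\n", " ")
print(length(one_row))                           // 1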

What do you understand or find about that problem

In my analysis, an empty (or shorter-than-expected) array [ ] appears in the process and conflicts with the “slice” call inside the same function where the data is read: slice receives fewer elements than its arguments assume, so they fall out of range. I suspect the conflict arises when operations like “slice” or “remove” are attempted without first returning the array obtained from tokenize(get_line(""), "\n", " ").
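
A minimal sketch of how that failure would manifest (the index values are hypothetical; it assumes slice(array, start, size) checks both arguments against the array length):

let row = tokenize(get_line(""), "\n", " ")  // one line read: a single element at index 0
let rest = slice(row, 1, 2)                  // start 1, size 2 exceed the length -> slice: values out of range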

Did you try any workaround? What did you do?

I conducted step-by-step tests, printing each return value in the CLI. They show that an empty array [ ] reaches “slice” when it is called within the same function where the data is read. I believe the conflict can be resolved by performing a return before executing operations like “slice” or “remove.”
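
A sketch of that workaround as a guard (the function name is hypothetical; only the slice call from the error is assumed):

def rows_after_header(rows):
    // return early, before slice or remove can see a too-short array
    if length(rows) <= 1:
        return []
    return slice(rows, 1, length(rows) - 1)

let body = rows_after_header(tokenize(get_line(""), "\n", " "))  // yields [] instead of a VM error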

Evidence

MR: https://gitlab.com/autonomicjump/challenges/-/merge_requests/16435/diffs

I need help with

I would appreciate confirmation from anyone who has encountered a similar problem. Additionally, I am proposing a solution: perform a return before executing operations like “slice” or “remove.” I plan to test this solution and will report back on the results.

Hi! I think the problem is that get_line("") and newStr are not the same. get_line("") is the first line of DATA.lst (if that is what you passed to standard input) and newStr is DATA.lst in its entirety.

Hi, cosmic-king
In my understanding, the problem arises because the read_file function takes the entire dataset from the input file, while the get_line function processes it line by line. Consequently, with read_file, the tokenize function creates an array with indices ranging from zero to length - 1. With get_line, on the other hand, tokenize generates a separate array for each line whose only valid index is zero, so each array contains a single line of data. When I tested your code after changing index 1 to index 0, it ran successfully.

I recommend refactoring your code to either operate on each line individually or build an array that encompasses the entire dataset; a sketch of both options follows. This adjustment should resolve the issue you’re facing.
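
A sketch of both options (variable names are hypothetical; only the read_file, get_line, tokenize, and slice calls already discussed are used):

// Option A: build one array for the entire dataset, then slice freely
let data = read_file("DATA.lst")
if data:
    let rows = tokenize(data, "\n", " ")     // indices 0 .. length(rows) - 1
    print(slice(rows, 1, length(rows) - 1))  // safe: the array spans the whole file

// Option B: operate on each line individually; only index 0 is valid per line
let row = tokenize(get_line(""), "\n", " ")
print(row[0])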