What happened with test_policy and vbd challs?

What happens

I’m having problems with test_policy again, mainly with challenges of type vbd. I want to understand why it is not counting the challenges but still accepts the test.

What do you understand or have found about that problem

Previously I had problems with test_policy due to the structure that these challenges must have, and thanks to @pastel-code I was able to understand a little more here.

Now I think I have the challenges well structured, but I keep getting the same errors.

Did you make any workaround? What did you do?

I have seen that this problem has happened to many users, and there is a lot of information about how to solve it yourself, but the information is still confusing. For example, @pastel-code says the structure should be

site -> challenge -> feature/evidence

And in this way it coincides with the structure described in https://gitlab.com/autonomicmind/challenges/-/wikis/structure, but an approver also told another user with the same problem as me in https://gitlab.com/autonomicmind/challenges/-/issues/269 that the structure should be like this:

make it in only 1 folder with this structure <CWE>-<ToE>-<vulnerability>

Coincidentally, most users hit these errors when trying to push hack challenges. I think I have the correct structure; however, I suspect it is due to the configuration of test_policy. Reading the code that manages the test, I see a function that gets the deviation:

from typing import Any, Dict

def get_deviation(solutions: Dict[str, int], policy: Any) -> int:
    code_active: bool = policy['code']['active']
    hack_active: bool = policy['hack']['active']
    vbd_active: bool = policy['vbd']['active']

    code_to_hack: int = abs(solutions['code'] - solutions['hack']) \
        if code_active and hack_active else 0
    code_to_vbd: int = abs(solutions['code'] - solutions['vbd']) \
        if code_active and vbd_active else 0
    hack_to_vbd: int = abs(solutions['hack'] - solutions['vbd']) \
        if hack_active and vbd_active else 0

    return max(code_to_hack, code_to_vbd, hack_to_vbd)
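To sanity-check how the deviation is computed, here is a self-contained run of that function with example counts; the shape of the policy dict is inferred from the fields the function reads, so treat it as an assumption:

```python
from typing import Any, Dict

def get_deviation(solutions: Dict[str, int], policy: Any) -> int:
    code_active: bool = policy['code']['active']
    hack_active: bool = policy['hack']['active']
    vbd_active: bool = policy['vbd']['active']

    # each pairwise difference only counts when both scopes are active
    code_to_hack: int = abs(solutions['code'] - solutions['hack']) \
        if code_active and hack_active else 0
    code_to_vbd: int = abs(solutions['code'] - solutions['vbd']) \
        if code_active and vbd_active else 0
    hack_to_vbd: int = abs(solutions['hack'] - solutions['vbd']) \
        if hack_active and vbd_active else 0

    return max(code_to_hack, code_to_vbd, hack_to_vbd)

# assumed policy shape: every scope active
policy = {scope: {'active': True} for scope in ('code', 'hack', 'vbd')}
solutions = {'code': 5, 'hack': 6, 'vbd': 4}
print(get_deviation(solutions, policy))  # max(1, 1, 2) -> 2
```

So the returned deviation is the largest gap between any two active scopes, which is why adding a solution to the scope you already lead in moves the deviation up.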

And from what I see of that function, it seems that some cases are missing, like hack_to_code, vbd_to_code and vbd_to_hack.

I also think there is a bug with certain numbers in the deviation variables of the challenges, because, like the user in that issue, I currently have the same composition of solutions and get the same problem when trying to push a hack challenge (this is my job).

Again, it may be my structure, but reading the code I see the use of functions that I suspect may behave unexpectedly, such as abs():

import glob
from typing import Any, Dict, List

from ruamel.yaml import YAML

def is_deviation_valid(last_commit: Any, policies: str, username: str) -> bool:
    with open(policies, 'r') as raw_policies:
        yaml: Any = YAML()
        data_policies: Any = yaml.load(raw_policies)
        user_policy: Any = get_policy_by_user(username, data_policies)
        last_solution_path: str = get_solution_path(last_commit, username)
        new_solutions: List[str] = glob.glob(f'**/**/**/{username}.*')
        old_solutions: List[str] = \
            [x for x in new_solutions if x != last_solution_path]
        # (the right-hand sides of these two assignments were cut off in my paste)
        new_unique_solutions: Dict[str, int] = ...
        old_unique_solutions: Dict[str, int] = ...
        new_deviation: int = get_deviation(new_unique_solutions, user_policy)
        old_deviation: int = get_deviation(old_unique_solutions, user_policy)
        exp_deviation: int = user_policy["deviation"]
        if abs(new_deviation - exp_deviation) \
                > abs(old_deviation - exp_deviation) \
                and old_deviation != exp_deviation:
            log_err(f'Your old deviation was: {old_deviation}')
            log_err(f'Your new deviation is: {new_deviation}')
            log_err(f'Your expected deviation is: {exp_deviation}')
            log_err(f'Your unique solutions are: {new_unique_solutions}')
            log_err(f'Your new deviation should be closer to your '
                    'expected deviation')
            return False
    return True
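One detail worth noting in that function: `glob.glob` is called without `recursive=True`, so each `**` behaves like a plain `*`, and the pattern only matches files exactly three directories deep. A minimal sketch of the consequence (directory names and username are made up, not from the repo):

```python
import glob
import os
import tempfile

username = 'someuser'  # hypothetical username
old_cwd = os.getcwd()
with tempfile.TemporaryDirectory() as root:
    # correctly nested: exactly three directories deep
    good = os.path.join(root, 'vbd', 'site', 'challenge')
    # one level too shallow: invisible to the pattern
    bad = os.path.join(root, 'vbd', 'challenge')
    os.makedirs(good)
    os.makedirs(bad)
    open(os.path.join(good, f'{username}.py'), 'w').close()
    open(os.path.join(bad, f'{username}.py'), 'w').close()
    os.chdir(root)
    try:
        # without recursive=True, '**' matches a single path component
        matches = glob.glob(f'**/**/**/{username}.*')
    finally:
        os.chdir(old_cwd)
print(matches)  # only the correctly nested solution appears
```

That would explain a wrongly structured challenge being silently ignored instead of rejected: the glob simply never sees it.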


I need help with

Since this has happened to me before, I would like to know whether the challenges I have merged are valid for test_policy. I also think the test should not pass if the structure is not correct. Finally, I would like to know if there is any way to control the deviation, I mean, to know exactly what type of challenge I should send.

Hi, the cases that you say are missing aren’t actually missing. Suppose this:

code: 4
hack: 3

code_to_hack = abs(4 - 3) = 1
hack_to_code = abs(3 - 4) = 1

We don’t need those cases because they duplicate the results of the existing ones.

On the other hand, the test policy tries to evaluate the challenges that it can read. That is why the structure is important: if the structure is wrong, the policy doesn’t know what it needs to read, and it probably passes because it found nothing to test.

But thinking about it, it could be a good thing to implement a failure when the policy doesn’t find any challenge, because that means the structure is wrong.
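A minimal sketch of that guard, assuming the policy already has the per-scope counts available (the function name and message are hypothetical, not from the repo):

```python
from typing import Dict

def check_solutions_found(unique_solutions: Dict[str, int]) -> bool:
    # Hypothetical guard: if every scope counts zero solutions,
    # the structure is probably wrong, so fail instead of passing silently.
    if all(count == 0 for count in unique_solutions.values()):
        print('No solutions found; check your challenge structure')
        return False
    return True

print(check_solutions_found({'code': 0, 'hack': 0, 'vbd': 0}))  # False
print(check_solutions_found({'code': 1, 'hack': 0, 'vbd': 0}))  # True
```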

But if you want to know more about the test, you can read the policy like you are doing, or ask @infinite-loop, who created the policy.

Thank you very much.

But thinking about it, it could be a good thing to implement a failure when the policy doesn’t find any challenge, because that means the structure is wrong.

The truth is that it would be a good idea. On the other hand, could you tell me whether the vbd challenges I have merged so far have the correct structure? I have fixed the previous challenges, but I would like to be sure it really worked.

You can run test_policy locally and check that easily: try deleting the solution, check how many solutions the policy shows for that scope, then add the solution back and rerun the policy.

The command should be:

$ ./build.sh test_policy

You must have nix installed to run that

And for this, we are open to contributions: you can open an issue, discuss it, and upload the changes if everyone agrees.

Locally I am getting a different error:

I removed all my solutions from vbd and tested build.sh with test_policy, but now the error seems different from the deviation issue.

I do not understand what is happening

Modifying the code, I realized that the solutions have nothing to do with my branch.

Another interesting thing to note is that in this commit https://gitlab.com/autonomicmind/challenges/-/commit/ec85114645d6b0ca2d88c729824dafa6bc9cadb3 it should have failed test_policy like the following one, but it didn’t; it failed on test_others and test_generic. I am trying to find out whether the structure is wrong and it is failing for another reason.

And trying to send a hack challenge to test (this commit), I get the deviation error, but locally it works fine.


I think that for the test you should have the new solution committed and ahead in the branch (with your username); with that, you can run the policy fine. Check that your local is up to date with the remote; you probably don’t have the same files locally and remotely.

If with that you find inconsistencies, you can use the screenshots, logs, code changes, etc. to open an issue and discuss it.

It is up to date

If you look, I print the branch name and my solutions (which are obviously not correct). The pipeline says I have {'code': 5, 'hack': 6, 'vbd': 4}, while locally it says I have {'code': 4, 'hack': 5, 'vbd': 5}, and it is up to date. I also have more inconsistencies.

From what I can see, in code you already have around 7-8 unique solutions, because the solutions in go aren’t unique anymore.

But for some reason the policy only takes 4-5 solutions as unique. That can happen in the other scopes too, because the logic in the ranking is different from the logic in the policy: the ranking scans all files for every scope and shows them, so you can have the wrong structure and the ranking counts your solution, but the policy does not.
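A small experiment contrasting the two scans (the directory layout and username are made up, and the exact patterns are assumptions based on the policy code above): a solution nested one level too deep is found by a ranking-style recursive scan but missed by the policy’s fixed-depth pattern.

```python
import glob
import os
import tempfile

# Hypothetical layout: a solution nested one level deeper than expected.
username = 'someuser'
old_cwd = os.getcwd()
with tempfile.TemporaryDirectory() as root:
    deep = os.path.join(root, 'vbd', 'site', 'challenge', 'extra')
    os.makedirs(deep)
    open(os.path.join(deep, f'{username}.py'), 'w').close()
    os.chdir(root)
    try:
        # ranking-style recursive scan: finds the file at any depth
        ranking = len(glob.glob(f'**/{username}.*', recursive=True))
        # policy-style fixed-depth pattern: exactly three directories, so it misses it
        policy = len(glob.glob(f'**/**/**/{username}.*'))
    finally:
        os.chdir(old_cwd)
print(ranking, policy)  # 1 0
```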

It is weird that you have different results locally and remotely; you could ask @infinite-loop, he may have better answers on that than me.


Hi, I opened an issue explaining why test_policy presents these inconsistencies; you can check it here: https://gitlab.com/autonomicmind/challenges/-/issues/289