While you won’t find anything suggesting this in author guidelines or journal policies, research has shown that papers reporting non-significant findings are more likely to be rejected by reviewers. This is a real problem within the academic landscape.
However, there is a growing number of remedies for this, all under the banner of open science. Ideally, you want reviewers to judge the research question, the methods, the design, and everything around that — but not the results.
Given that researchers are human, this can be difficult. As a reviewer, you do see the results if they aren’t blinded in some way, and this can guide your review. If a hypothesis yields a non-significant finding, that may simply be due to chance — that’s how statistics works. Yet many reviewers are biased toward concluding that the hypothesis was bad or the research design was flawed, which gives them a reason to reject the work.
As I mentioned, there are remedies for this. You could submit your research without the findings — this is the logic behind pre-registered studies and registered reports, where the study is reviewed before the results are known, and it can certainly help. It is also crucial to frame your paper in the direction of your findings, making it clear from the beginning what your research contributes.
In the end — and this relates to the question you’ve been asking — the number one reason editors reject papers is a lack of fit with the journal. How you frame your paper, and how your conclusions advance that journal’s field, has a significant effect on the likelihood of your work being published.