I have been facing a lot of errors with CorTexT for a few weeks now.
The thing is: I can't identify the source of the problem without the complete logs. When I click on "logs" I only see the first lines (the ones without errors), then the output is cut off.
And on the main project page I can read:
Debug Log: Error! Log file not found.
Any ideas?
Dear Vincent,
We have not seen any increase in the failure rate over the last few days.
And the last improvement we made regarding the logs is to show only the beginning and the end when a log is really big (to avoid loading an overly large log in the user's browser). You can still download the full log, and that change was made months ago.
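To give an idea, the display logic does something roughly along these lines (this is only a simplified sketch for illustration, not the exact code we use; the threshold and the marker text are made up):

# Simplified sketch of head-and-tail log display (illustrative only).
MAX_LINES = 200  # hypothetical threshold for "really big" logs

def preview_log(path, max_lines=MAX_LINES):
    """Return the whole log if it is small, otherwise only its head and tail."""
    with open(path, encoding="utf-8", errors="replace") as f:
        lines = f.readlines()
    if len(lines) <= max_lines:
        return "".join(lines)
    head = lines[: max_lines // 2]
    tail = lines[-(max_lines // 2):]
    return "".join(head) + "[... log truncated, download the full file to see the rest ...]\n" + "".join(tail)

The full file remains available for download, so nothing is lost; only the in-browser preview is shortened.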
So it is strange. Could you invite me to your project (lionel dot villard at esiee dot fr)? I am not sure I fully understand the problem.
Best
lionel
Thanks for your quick response!
The thing is that I see inconsistent behavior: some scripts fail at times, then work again with the same parameters. Regarding the logs, I recently had that error with the corpus terms indexer where there is the head of the logs but no Python error stack trace, yet it failed (and the log file is not found in the main UI). Can you check whether there is something wrong with my small terms list (latest TSV upload)? I don't see anything wrong myself; NaN should not be a problem, at least it worked before.
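For what it is worth, here is roughly how I looked at the file on my side (just a quick pandas sketch; the file name is a placeholder for my actual upload):

# Quick sanity check of the terms list TSV (file name is a placeholder).
import pandas as pd

terms = pd.read_csv("terms_list.tsv", sep="\t")
print(terms.shape)                      # number of rows and columns
print(terms.isna().sum())               # NaN count per column
print(terms[terms.isna().any(axis=1)])  # rows that contain NaN values

Nothing jumped out at me this way.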
For a long time I have also seen a lot of errors with Sashimi, but I thought this was normal since it is tagged as experimental, and the log messages were mostly unhelpful. Recently, though, the error I got with Sashimi clearly looks like a bug (AttributeError: 'Job' object has no attribute 'script_path').
See my latest jobs; I just invited you to the project.
When I run into this kind of error, is there a Git repository where I can create an issue?
Apparently something went wrong during one of the steps you performed: there is no longer a time variable, even though CorTexT Manager expects one (and you did have one after the initial parsing). You "just have" to parse your RIS data again.
Yes, Sashimi has evolved a lot over the last two weeks: it has been simplified and some new features have been added. It should be much more stable now.
I hope this helps,
L
I don't see how, but yes, it is possible I removed the ISIDate field by mistake when I was cleaning up unused fields…
It would be great if CorTexT could detect that kind of problem and log it before crashing without any warning!
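For example, something as simple as a pre-flight check before a script starts would already help (just a sketch of the idea; the field name and the message are made up, not how CorTexT actually works):

# Sketch of the kind of pre-flight check I have in mind (field name is hypothetical).
REQUIRED_FIELDS = ["ISIDate"]  # whatever time variable the script expects

def check_required_fields(available_fields, required=REQUIRED_FIELDS):
    """Fail early with a clear message instead of crashing later without warning."""
    missing = [f for f in required if f not in available_fields]
    if missing:
        raise ValueError("Missing required field(s): " + ", ".join(missing)
                         + " - please re-parse your data")

# e.g. check_required_fields(["ISItermsCleanedTerms"]) would raise a clear error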
Thanks again for your time!
Hello,
Still struggling: now I have an error with the "Epic Epoch" script, and again there is nothing useful in the log.
I'm trying to produce a bump graph of the 20 most frequent (indexed) terms. This was working fine the last time I tried.
I made sure to check that my publication date field was not deleted.
I'm not sure, but looking at the log file, it almost seems the script is executed twice:
2022-10-04 11:23:01 INFO : Script Epic Epoch Started
2022-10-04 11:23:01 INFO :
Data Description:
field: ISItermsCleanedTerms
Size of the Hierarchy: '20'
Normalization of frequency count: false
Dynamics:
Choose Original Timescale: Standard Periods
Number of time slices: '4'
time slices distribution: regular
Overlapping periods: false
sequencing: snapshot
2022-10-04 11:23:01 INFO :
Data Description:
field: ISItermsCleanedTerms
Size of the Hierarchy: '20'
Normalization of frequency count: false
Dynamics:
Choose Original Timescale: Standard Periods
Number of time slices: '4'
time slices distribution: regular
Overlapping periods: false
sequencing: snapshot