CodeQL query - python query on 1000 repositories #90
Workflow: queries (on: dynamic)

Jobs:
- setup (3s)
- Matrix: run
- update-repo-tasks-statuses-cancelled (0s)
- update-repo-tasks-statuses-failure (0s)
Annotations (5 errors)
run (celery/celery, pymc-devs/pymc, Knio/dominate, ranger/ranger, bottlepy/bottle, tweepy/tweepy,...
The process /opt/hostedtoolcache/CodeQL/2.22.0/x64/codeql/codeql database run-queries --ram=14971 --additional-packs /home/runner/work/_temp/18cf7302-16f6-4776-bc24-98bbf2749ea4 -- /home/runner/work/codeql_controller/codeql_controller/324077Y5dx57/db getting-started/codeql-extra-queries-python exited with code 1
run (celery/celery, pymc-devs/pymc, Knio/dominate, ranger/ranger, bottlepy/bottle, tweepy/tweepy,...
The process /opt/hostedtoolcache/CodeQL/2.22.0/x64/codeql/codeql database run-queries --ram=14971 --additional-packs /home/runner/work/_temp/18cf7302-16f6-4776-bc24-98bbf2749ea4 -- /home/runner/work/codeql_controller/codeql_controller/301742tyvBsm/db getting-started/codeql-extra-queries-python exited with code 100
run (celery/celery, pymc-devs/pymc, Knio/dominate, ranger/ranger, bottlepy/bottle, tweepy/tweepy,...
Failed to run database run-queries --ram=14971 --additional-packs /home/runner/work/_temp/18cf7302-16f6-4776-bc24-98bbf2749ea4 -- /home/runner/work/codeql_controller/codeql_controller/301742tyvBsm/db getting-started/codeql-extra-queries-python:
[1/1] Loaded /home/runner/work/_temp/18cf7302-16f6-4776-bc24-98bbf2749ea4/new_global_online_2.qlx.
Starting evaluation of getting-started/codeql-extra-queries-python/new_global_online_2.ql.
Oops! A fatal internal error occurred. Details:
com.semmle.util.exception.CatastrophicError: An error occurred while evaluating MRO::ClassList.deduplicateCons/3#c7cf8125/4@i216#f271ezzs
Severe disk cache trouble (corruption or out of space) at /home/runner/work/codeql_controller/codeql_controller/301742tyvBsm/db/db-python/default/cache/pages/92/f6.pack: Failed to write item to disk
The RA to evaluate was:
{5} r1 = SCAN `MRO::ClassList.deduplicate/1#7d79ab0b#prev_delta` OUTPUT In.0, int _, int _, In.2, In.1
{4} | REWRITE WITH Out.1 := 0, Tmp.2 := 1, Out.2 := (In.4 - Tmp.2) KEEPING 4
{4} | JOIN WITH `MRO::ClassList.firstIndex/2#afe99fba#reorder_0_2_3_1#prev` ON FIRST 3 OUTPUT Lhs.0, Lhs.2, Rhs.3, Lhs.3
{4} r2 = SCAN `MRO::ClassList.firstIndex/2#afe99fba#reorder_0_2_3_1#prev_delta` OUTPUT In.1, In.0, In.2, In.3
{4} | JOIN WITH const_0 ON FIRST 1 OUTPUT Lhs.1, int _, Lhs.3, Lhs.2
{4} | REWRITE WITH Tmp.1 := 1, Out.1 := (InOut.3 + Tmp.1)
{4} | JOIN WITH `MRO::ClassList.deduplicate/1#7d79ab0b#prev` ON FIRST 2 OUTPUT Lhs.0, Lhs.3, Lhs.2, Rhs.2
{4} r3 = r1 UNION r2
{4} | AND NOT `MRO::ClassList.deduplicateCons/3#c7cf8125#prev`(FIRST 4)
return r3
(eventual cause: IOException "No space left on device")
at com.semmle.inmemory.pipeline.MetaPipelineInstance.wrapWithRaDump(MetaPipelineInstance.java:211)
at com.semmle.inmemory.pipeline.MetaPipelineInstance.exceptionCaught(MetaPipelineInstance.java:181)
at com.semmle.inmemory.scheduler.execution.ThreadableWork.handleAndLog(ThreadableWork.java:593)
at com.semmle.inmemory.scheduler.execution.ThreadableWork.doSomeWork(ThreadableWork.java:410)
at com.semmle.inmemory.scheduler.RecursiveLayer$RecursiveWork.doWork(RecursiveLayer.java:514)
at com.semmle.inmemory.scheduler.execution.ThreadableWork.doSomeWork(ThreadableWork.java:396)
at com.semmle.inmemory.scheduler.execution.ExecutionScheduler.runnerMain(ExecutionScheduler.java:707)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: Severe disk cache trouble (corruption or out of space) at /home/runner/work/codeql_controller/codeql_controller/301742tyvBsm/db/db-python/default/cache/pages/92/f6.pack: Failed to write item to disk
(eventual cause: IOException "No space left on device")
at com.semmle.inmemory.caching.RelationCacheImpl.lambda$create$0(RelationCacheImpl.java:87)
at com.semmle.inmemory.caching.byhash.disk.OnDiskStore.put(OnDiskStore.java:136)
at com.semmle.inmemory.caching.byhash.interfaces.HashBasedCache.putIfPresent(HashBasedCache.java:29)
at com.semmle.inmemory.caching.byhash.evict.Evictor$ItemHandle.writeToDisk(Evictor.java:723)
at java.base/java.util.ArrayList.forEach(Unknown Source)
at com.semmle.inmemory.caching.byhash.evict.Evictor.writeSelectedItems(Evictor.java:1059)
at com.semmle.inmemory.caching.byhash.evict.Evictor.reduceMemoryUsage(Evictor.java:459)
at com.semmle.inmemory.alloc.MemoryManager.reduceArraySpace(MemoryManager.java:342)
at com.semmle.inmemory.alloc.RigidArrayAllocator.allocateArrays(RigidArrayAllocator.java:178)
at com.semmle.inmemory.alloc.RigidArrayAllocator$1.<init>(RigidArrayAllocator.java:216)
at com.semmle.inmemory.alloc.RigidArrayAllocator.preallocate(RigidArrayAllocator.java:211)
at com.semmle.inmemory.alloc.MemoryManager.preallocate(MemoryManager.java:491)
at com.semmle.inmemory.caching.PagePrimitives.parseItem(PagePrimitives.java:90)
at com.semmle.inmemory.caching.byhash.disk.OnDiskStore.prepareLoading(OnDiskStore.java:256)
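The root cause of the failure above is the runner's disk filling up (`No space left on device`) while the evaluator spills its cache to `db-python/default/cache`. A minimal diagnostic sketch for a pre-step, assuming a GitHub-hosted Ubuntu runner; the cleanup paths are assumptions about commonly preinstalled toolchains, not part of this workflow:

```shell
# Show free space on the root filesystem before evaluating queries;
# the CodeQL disk cache lives under the database directory on this disk.
df -h /

# Candidate cleanup of large preinstalled toolchains on a hosted
# Ubuntu runner (paths are assumptions; verify before deleting):
#   sudo rm -rf /usr/share/dotnet /usr/local/lib/android /opt/ghc
```

CodeQL also ships `codeql database cleanup`, which can trim a database's on-disk cache between evaluations if space stays tight.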
run (mozilla/bleach, scrapy/scrapy, urwid/urwid, geopy/geopy, django-tastypie/django-tastypie, ah...
The operation was canceled.
run (mozilla/bleach, scrapy/scrapy, urwid/urwid, geopy/geopy, django-tastypie/django-tastypie, ah...
The job has exceeded the maximum execution time of 6h0m0s
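The 6h cap here is the default maximum execution time for a job on GitHub-hosted runners. One way to keep a single slow repository from pushing the whole matrix shard past that ceiling is to bound each per-database evaluation; a sketch using GNU coreutils `timeout`, where the 30-minute cap and the `run_bounded` helper name are assumptions, not part of this workflow:

```shell
# run_bounded DB_DIR: evaluate the query pack against one database with
# a hard time cap, so a pathological repo fails on its own instead of
# exhausting the job's 6h budget. The 30m cap is an assumption; tune it
# to the slowest repo that still completes in a healthy run.
run_bounded() {
  timeout 30m codeql database run-queries --ram=14971 \
    -- "$1" getting-started/codeql-extra-queries-python
}
```

`timeout` exits with status 124 on expiry, so the matrix step can record which repository overran and continue, rather than having the remaining work canceled wholesale.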