Message from JavaScript discussions

June 2018

— The cost is logarithmic, so each thread added further reduces the speed of all the other threads

— So with only 2 threads, each thread runs roughly half as fast as it would by itself, at least on paper. Actual speeds will be somewhat below half, I think, because of kernel overhead

— The goal, then, is response time, rather than speed

— So you wouldn’t want to load up the kernel with CPU-bound work

— But even if you did, you still get the benefit of preemption, so CPU-bound work can be context-switched as soon as anything else needs the CPU
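
A minimal sketch of that idea in plain JavaScript, assuming generator-based cooperative scheduling; `sumRange` and `run` are illustrative names, not part of the project described here:

```js
// CPU-bound work written as a generator, so a scheduler can take the CPU
// back between slices instead of waiting for the whole loop to finish.
function* sumRange(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    total += i;
    if (i % 10000 === 0) yield; // preemption point every 10,000 iterations
  }
  return total;
}

// A trivial round-robin scheduler: advance each task one slice per turn.
function run(tasks) {
  while (tasks.length > 0) {
    const task = tasks.shift();
    const { done, value } = task.next();
    if (!done) tasks.push(task);        // not finished: requeue it
    else console.log('result:', value); // finished: report its result
  }
}

run([sumRange(1e6), sumRange(2e6)]);
```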


— It’s a problem I have to figure out, unique to JS, caused by how I am doing context switching. Each kernel component exposes what I named the dispatchIterator interface, which generally takes the form of an infinite iterator that implicitly time-steps work and encapsulates it, so the caller doesn’t need to know implementation details to run deferred work
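
A hedged sketch of what such an interface could look like, assuming plain generator functions; `makeDispatchIterator` and `queue` are illustrative names, not the actual kernel API:

```js
// An infinite iterator: each next() call performs one time-step of whatever
// deferred work is queued, hiding the implementation from the caller.
function* makeDispatchIterator(queue) {
  while (true) {
    const job = queue.shift();
    if (job) job(); // run one unit of deferred work, if any is pending
    yield;          // hand control back to the caller after every step
  }
}

// The caller only drives next(); it never sees how work is stored or run.
const queue = [() => console.log('a'), () => console.log('b')];
const dispatcher = makeDispatchIterator(queue);
dispatcher.next(); // runs 'a'
dispatcher.next(); // runs 'b'
dispatcher.next(); // nothing queued; still a valid time-step
```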


— The way I have it set up now, there are at a minimum about 5 layers of these dispatchers, each adding overhead to a context switch. Each one yields, in order, until the kernel has control of V8, which means they all get dumped to the memory heap. This happens constantly, so there is a lot of pressure on memory, which is typical of exokernels like this one
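
An illustrative sketch (not the kernel's real code) of how several layers like this can stack using `yield*`; a single yield at the bottom suspends every layer above it, so all of the generator frames stay live on the heap across each context switch:

```js
// The innermost layer does the actual time-stepped work.
function* innermost() {
  let tick = 0;
  while (true) yield tick++;
}

// Each wrapping layer just forwards yields, but adds one more suspended frame.
function* wrap(inner) {
  yield* inner;
}

// Stack five layers, like the ~5 dispatchers described above.
let chain = innermost();
for (let i = 0; i < 4; i++) chain = wrap(chain);

console.log(chain.next().value); // 0; the yield unwound through all 5 generators
console.log(chain.next().value); // 1
```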


— With the preemption source-code pre-processor in place, I could be looking at a GeneratorFunction “call stack” that is extremely deep
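
A rough illustration of the kind of rewrite such a pre-processor might perform; the actual transform in this project may differ. A plain function with a hot loop could be turned into a GeneratorFunction that yields on loop back-edges, and every call site becomes generator-to-generator, which is where the deep generator “call stack” comes from:

```js
// Hypothetical post-transform output: the loop body gains a yield so the
// scheduler can preempt it between iterations.
const work = (i) => i * 2; // stand-in for whatever the original loop did

function* busy() {
  let acc = 0;
  for (let i = 0; i < 1e5; i++) {
    acc += work(i);
    yield; // inserted preemption point
  }
  return acc;
}

// Callers are rewritten too, so the whole chain is generators driven by yield*.
function* caller() {
  return yield* busy();
}

// Driving the outermost generator steps the entire transformed call chain.
const g = caller();
let r = g.next();
while (!r.done) r = g.next();
console.log(r.value); // sum of work(i) for i in [0, 1e5)
```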


— :/

— Excuse me, does anyone here understand how to use Google Tag Manager?

— https://youtu.be/M3BM9TB-8yA

— Hi, how can I easily get the two-letter ISO country code from geolocation information?