Commit 428e152b authored by Ben Gamari, committed by Ben Gamari

Use C99's bool

Test Plan: Validate on lots of platforms

Reviewers: erikd, simonmar, austin

Reviewed By: erikd, simonmar

Subscribers: michalt, thomie

Differential Revision: https://phabricator.haskell.org/D2699
parent 56d74515
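
The substance of the change: the RTS's hand-rolled boolean type, removed from includes/rts/Types.h further down, is replaced everywhere by C99's bool. A minimal before/after sketch (both fragments are quoted from the diff below):

    /* Before this commit: the RTS's own boolean, in includes/rts/Types.h */
    typedef enum {
        rtsFalse = 0,
        rtsTrue
    } rtsBool;

    /* After: standard C99 booleans */
    #include <stdbool.h>   /* provides bool, true, false */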
......@@ -1641,13 +1641,13 @@ mkExtraObjToLinkIntoBinary dflags = do
<> text (show (rtsOptsEnabled dflags)) <> semi,
text " __conf.rts_opts_suggestions = "
<> text (if rtsOptsSuggestions dflags
then "rtsTrue"
else "rtsFalse") <> semi,
then "true"
else "false") <> semi,
case rtsOpts dflags of
Nothing -> Outputable.empty
Just opts -> text " __conf.rts_opts= " <>
text (show opts) <> semi,
text " __conf.rts_hs_main = rtsTrue;",
text " __conf.rts_hs_main = true;",
text " return hs_main(argc,argv,&ZCMain_main_closure,__conf);",
char '}',
char '\n' -- final newline, to keep gcc happy
......
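For context, the Haskell above pretty-prints a small C stub that is linked into the final binary; after this change the emitted flag values are C99 literals. A hedged sketch of the generated file (exact contents depend on DynFlags; hs_main and RtsConfig are the real entry points, the option values shown are illustrative):

    /* Sketch of the generated C stub (illustrative values). */
    #include "Rts.h"
    extern StgClosure ZCMain_main_closure;
    int main (int argc, char *argv[])
    {
        RtsConfig __conf = defaultRtsConfig;
        __conf.rts_opts_enabled = RtsOptsSafeOnly;  /* from rtsOptsEnabled dflags */
        __conf.rts_opts_suggestions = true;         /* was "rtsTrue" */
        __conf.rts_hs_main = true;                  /* was "rtsTrue" */
        return hs_main(argc, argv, &ZCMain_main_closure, __conf);
    }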
......@@ -244,7 +244,7 @@ If retainer profiling is being performed, @ldvTime@ is equal to $0$,
and @LDV_recordUse()@ causes no side effect.\footnote{Due to this
interference with LDVU profiling, retainer profiling slows down a bit;
for instance, checking @ldvTime@ against $0$ in the above example
would always evaluate to @rtsFalse@ during retainer profiling.
would always evaluate to @false@ during retainer profiling.
However, this is the price to be paid for our decision not to employ a
separate field for LDVU profiling.}
......@@ -646,7 +646,7 @@ with LDVU profiling.
\begin{description}
\item[GC.c] invokes @LdvCensusForDead()@ before tidying up, sets @hasBeenAnyGC@ to
@rtsTrue@, and changes @copy()@ and @copyPart()@.
@true@, and changes @copy()@ and @copyPart()@.
Invokes @LDV_recordDead()@ and @LDV_recordDead_FILL_SLOP_DYNAMIC()@.
\item[Itimer.c] changes @handle_tick()@.
\item[LdvProfile.c] implements the LDVU profiling engine.
......
......@@ -508,7 +508,7 @@ set is created. Otherwise, a new retainer set is created.
\item[@retainerSet *addElement(retainer r, retainerSet *rs)@] returns a retainer set
@rs@ augmented with @r@. If such a retainer set already exists, no new retainer set
is created. Otherwise, a new retainer set is created.
\item[@rtsBool isMember(retainer r, retainerSet *rs)@] returns a boolean value
\item[@bool isMember(retainer r, retainerSet *rs)@] returns a boolean value
indicating whether @r@ is a member of @rs@.
\item[@void printRetainerSetShort(FILE *, retainerSet *)@] prints a single retainer
set.
......
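A hedged usage sketch of the retainer-set API above, post-change (r1 and r2 of type retainer, and the NULL starting point, are assumptions for illustration):

    /* Sketch: building a retainer set and testing membership. */
    retainerSet *rs = NULL;
    rs = addElement(r1, rs);      /* reuses an existing set when one matches */
    rs = addElement(r2, rs);
    if (isMember(r1, rs)) {       /* now returns C99 bool, not rtsBool */
        printRetainerSetShort(stderr, rs);
    }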
......@@ -287,8 +287,8 @@ and returns it to the storage manager.
A macro in @include/StgStorage.h@.
\item[@ExtendNursery(hp, hplim)@] closes the current allocation area and
tries to find a new allocation area in the nursery.
If it succeeds, it sets @hp@ and @hplim@ appropriately and returns @rtsTrue@;
otherwise, it returns @rtsFalse@,
If it succeeds, it sets @hp@ and @hplim@ appropriately and returns @true@;
otherwise, it returns @false@,
which means that the nursery has been exhausted.
The new allocation area is not necessarily contiguous with the old one.
A macro in @Storage.h@.
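The description above implies the following calling pattern, sketched here (only the documented true/false contract is taken as given; trigger_gc is a hypothetical stand-in):

    /* Sketch: refilling the allocation area from the nursery. */
    while (!ExtendNursery(hp, hplim)) {
        /* false: nursery exhausted; reclaim space before retrying */
        trigger_gc();
    }
    /* true: hp and hplim now delimit a fresh allocation area */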
......@@ -477,7 +477,7 @@ collector makes an efficient use of heap memory.
\item[@void *mark\_root(StgClosure **root)@] informs the garbage collector
that @*root@ is an object in the root set. It replaces @*root@ by
the new location of the object. @GC.c@.
\item[@void GarbageCollect(void (*get\_roots)(evac\_fn), rtsBool force\_major\_gc)@]
\item[@void GarbageCollect(void (*get\_roots)(evac\_fn), bool force\_major\_gc)@]
performs a garbage collection.
@get_roots()@ is a function which is called by the garbage collector when
it wishes to find all the objects in the root set (other than those
......@@ -487,9 +487,9 @@ Therefore it is incumbent on the caller to find the root set.
or not. If a major garbage collection is not required, the garbage collector
decides on its own the oldest generation $g$ to garbage collect.
@GC.c@.
\item[@rtsBool doYouWantToGC(void)@] returns @rtsTrue@ if the garbage
\item[@bool doYouWantToGC(void)@] returns @true@ if the garbage
collector is ready to perform a garbage collection. Specifically, it returns
@rtsTrue@ if the number of allocated blocks since the last garbage collection
@true@ if the number of allocated blocks since the last garbage collection
(@alloc_blocks@ in @Storage.c@) exceeds an approximate limit
(@alloc_blocks_lim@ in @Storage.c@).
@Storage.h@.
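Together, these two entry points suggest the shape of the collection trigger, sketched below (mark_roots is a hypothetical evac_fn; the real scheduler logic is more involved):

    /* Sketch: consulting the storage manager before collecting. */
    if (doYouWantToGC()) {                  /* true once alloc_blocks > alloc_blocks_lim */
        GarbageCollect(mark_roots, false);  /* false: not forcing a major GC */
    }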
......@@ -700,11 +700,11 @@ The overall structure of a garbage collection is as follows:
During initialization, the garbage collector first decides which generation
to garbage collect.
Specifically,
if the argument @force_major_gc@ to @GarbageCollect()@ is @rtsFalse@,
if the argument @force_major_gc@ to @GarbageCollect()@ is @false@,
it decides the greatest generation number $N$ such
that the number of blocks allocated in step $0$ of generation $N$ exceeds
@generations[@$N$@].max_blocks@.
If the argument @force_major_gc@ to @GarbageCollect()@ is @rtsTrue@,
If the argument @force_major_gc@ to @GarbageCollect()@ is @true@,
$N$ is set to the greatest generation number, namely,
$@RtsFlags.GcFlags.generations@ - 1$.
The garbage collector considers up to generation $N$ for garbage collection.
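The selection rule just described amounts to something like the following sketch (the per-step block-count field names are assumptions; only the rule itself is from the text):

    /* Sketch: choosing the oldest generation N to collect. */
    uint32_t N = 0;
    if (force_major_gc) {
        N = RtsFlags.GcFlags.generations - 1;   /* collect every generation */
    } else {
        for (uint32_t g = 0; g < RtsFlags.GcFlags.generations; g++) {
            if (generations[g].steps[0].n_blocks > generations[g].max_blocks) {
                N = g;                           /* greatest over-budget generation wins */
            }
        }
    }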
......@@ -805,7 +805,7 @@ The rationale is that the contents of @r@ cannot be updated any more,
and thus @r@ is always survived by @o@; @o@ is live as long as @r@ is.
Therefore, we wish @o@ to be evacuated to the same generation $M$ in which @r@
currently resides (not to its next step).
If the evacuation succeeds (indicated by a @rtsFalse@ value of a variable
If the evacuation succeeds (indicated by a @false@ value of a variable
@failed_to_evac@, declared in @GC.c@) for every object @o@, @r@ is removed
from the list @mut_once_list@ because it does not hold any backward
inter-generational pointers.\footnote{It turns out that @r@ can have only
......
......@@ -40,7 +40,7 @@ defaultsHook (void)
// This helps particularly with large compiles, but didn't work
// very well with earlier GHCs because it caused large amounts of
// fragmentation. See rts/sm/BlockAlloc.c:allocLargeChunk().
RtsFlags.GcFlags.heapSizeSuggestionAuto = rtsTrue;
RtsFlags.GcFlags.heapSizeSuggestionAuto = true;
RtsFlags.GcFlags.maxStkSize = 512*1024*1024 / sizeof(W_);
......
......@@ -50,6 +50,7 @@
CInt has the same size as an int in C on this platform
CLong has the same size as a long in C on this platform
CBool has the same size as a bool in C on this platform
--------------------------------------------------------------------------- */
......@@ -95,6 +96,8 @@
#error Unknown long size
#endif
#define CBool bits8
#define F_ float32
#define D_ float64
#define L_ bits64
......@@ -229,7 +232,7 @@
* Note the syntax is slightly different to the C version of this macro.
*/
#ifdef DEBUG
#define IF_DEBUG(c,s) if (RtsFlags_DebugFlags_##c(RtsFlags) != 0::I32) { s; }
#define IF_DEBUG(c,s) if (RtsFlags_DebugFlags_##c(RtsFlags) != 0::CBool) { s; }
#else
#define IF_DEBUG(c,s) /* nothing */
#endif
......
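The Cmm macro above mirrors the C-side IF_DEBUG in the RTS headers, which tests the same DebugFlags fields; a usage sketch (debugBelch is the RTS tracing primitive; the message is illustrative):

    /* Sketch: C-side equivalent of the Cmm macro above. */
    IF_DEBUG(scheduler, debugBelch("scheduler: waking task\n"));
    /* roughly expands to:
       if (RtsFlags.DebugFlags.scheduler) { debugBelch("..."); }  */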
......@@ -45,22 +45,22 @@ typedef struct _GC_FLAGS {
uint32_t nurseryChunkSize; /* in *blocks* */
uint32_t minOldGenSize; /* in *blocks* */
uint32_t heapSizeSuggestion; /* in *blocks* */
rtsBool heapSizeSuggestionAuto;
bool heapSizeSuggestionAuto;
double oldGenFactor;
double pcFreeHeap;
uint32_t generations;
rtsBool squeezeUpdFrames;
bool squeezeUpdFrames;
rtsBool compact; /* True <=> "compact all the time" */
bool compact; /* True <=> "compact all the time" */
double compactThreshold;
rtsBool sweep; /* use "mostly mark-sweep" instead of copying
bool sweep; /* use "mostly mark-sweep" instead of copying
* for the oldest generation */
rtsBool ringBell;
bool ringBell;
Time idleGCDelayTime; /* units: TIME_RESOLUTION */
rtsBool doIdleGC;
bool doIdleGC;
StgWord heapBase; /* address to ask the OS for memory */
......@@ -72,29 +72,29 @@ typedef struct _GC_FLAGS {
* raise it again.
*/
rtsBool numa; /* Use NUMA */
bool numa; /* Use NUMA */
StgWord numaMask;
} GC_FLAGS;
/* See Note [Synchronization of flags and base APIs] */
typedef struct _DEBUG_FLAGS {
/* flags to control debugging output & extra checking in various subsystems */
rtsBool scheduler; /* 's' */
rtsBool interpreter; /* 'i' */
rtsBool weak; /* 'w' */
rtsBool gccafs; /* 'G' */
rtsBool gc; /* 'g' */
rtsBool block_alloc; /* 'b' */
rtsBool sanity; /* 'S' warning: might be expensive! */
rtsBool stable; /* 't' */
rtsBool prof; /* 'p' */
rtsBool linker; /* 'l' the object linker */
rtsBool apply; /* 'a' */
rtsBool stm; /* 'm' */
rtsBool squeeze; /* 'z' stack squeezing & lazy blackholing */
rtsBool hpc; /* 'c' coverage */
rtsBool sparks; /* 'r' */
rtsBool numa; /* '--debug-numa' */
bool scheduler; /* 's' */
bool interpreter; /* 'i' */
bool weak; /* 'w' */
bool gccafs; /* 'G' */
bool gc; /* 'g' */
bool block_alloc; /* 'b' */
bool sanity; /* 'S' warning: might be expensive! */
bool stable; /* 't' */
bool prof; /* 'p' */
bool linker; /* 'l' the object linker */
bool apply; /* 'a' */
bool stm; /* 'm' */
bool squeeze; /* 'z' stack squeezing & lazy blackholing */
bool hpc; /* 'c' coverage */
bool sparks; /* 'r' */
bool numa; /* '--debug-numa' */
} DEBUG_FLAGS;
/* See Note [Synchronization of flags and base APIs] */
......@@ -125,10 +125,10 @@ typedef struct _PROFILING_FLAGS {
Time heapProfileInterval; /* time between samples */
uint32_t heapProfileIntervalTicks; /* ticks between samples (derived) */
rtsBool includeTSOs;
bool includeTSOs;
rtsBool showCCSOnException;
bool showCCSOnException;
uint32_t maxRetainerSetSize;
......@@ -151,12 +151,12 @@ typedef struct _PROFILING_FLAGS {
/* See Note [Synchronization of flags and base APIs] */
typedef struct _TRACE_FLAGS {
int tracing;
rtsBool timestamp; /* show timestamp in stderr output */
rtsBool scheduler; /* trace scheduler events */
rtsBool gc; /* trace GC events */
rtsBool sparks_sampled; /* trace spark events by a sampled method */
rtsBool sparks_full; /* trace spark events 100% accurately */
rtsBool user; /* trace user events (emitted from Haskell code) */
bool timestamp; /* show timestamp in stderr output */
bool scheduler; /* trace scheduler events */
bool gc; /* trace GC events */
bool sparks_sampled; /* trace spark events by a sampled method */
bool sparks_full; /* trace spark events 100% accurately */
bool user; /* trace user events (emitted from Haskell code) */
} TRACE_FLAGS;
/* See Note [Synchronization of flags and base APIs] */
......@@ -177,8 +177,8 @@ typedef struct _CONCURRENT_FLAGS {
/* See Note [Synchronization of flags and base APIs] */
typedef struct _MISC_FLAGS {
Time tickInterval; /* units: TIME_RESOLUTION */
rtsBool install_signal_handlers;
rtsBool machineReadable;
bool install_signal_handlers;
bool machineReadable;
StgWord linkerMemBase; /* address to ask the OS for memory
* for the linker, NULL ==> off */
} MISC_FLAGS;
......@@ -186,12 +186,12 @@ typedef struct _MISC_FLAGS {
/* See Note [Synchronization of flags and base APIs] */
typedef struct _PAR_FLAGS {
uint32_t nCapabilities; /* number of threads to run simultaneously */
rtsBool migrate; /* migrate threads between capabilities */
bool migrate; /* migrate threads between capabilities */
uint32_t maxLocalSparks;
rtsBool parGcEnabled; /* enable parallel GC */
bool parGcEnabled; /* enable parallel GC */
uint32_t parGcGen; /* do parallel GC in this generation
* and higher only */
rtsBool parGcLoadBalancingEnabled;
bool parGcLoadBalancingEnabled;
/* enable load-balancing in the
* parallel GC */
uint32_t parGcLoadBalancingGen;
......@@ -209,12 +209,12 @@ typedef struct _PAR_FLAGS {
/* Use this many threads for parallel
* GC (default: use all nNodes). */
rtsBool setAffinity; /* force thread affinity with CPUs */
bool setAffinity; /* force thread affinity with CPUs */
} PAR_FLAGS;
/* See Note [Synchronization of flags and base APIs] */
typedef struct _TICKY_FLAGS {
rtsBool showTickyStats;
bool showTickyStats;
FILE *tickyFile;
} TICKY_FLAGS;
......
......@@ -20,7 +20,7 @@ typedef struct _HpcModuleInfo {
StgWord32 tickCount; // number of ticks
StgWord32 hashNo; // Hash number for this module's mix info
StgWord64 *tixArr; // tix Array; local for this module
rtsBool from_file; // data was read from the .tix file
bool from_file; // data was read from the .tix file
struct _HpcModuleInfo *next;
} HpcModuleInfo;
......
......@@ -171,17 +171,17 @@ typedef void OSThreadProcAttr OSThreadProc(void *);
extern int createOSThread ( OSThreadId* tid, char *name,
OSThreadProc *startProc, void *param);
extern rtsBool osThreadIsAlive ( OSThreadId id );
extern void interruptOSThread (OSThreadId id);
extern bool osThreadIsAlive ( OSThreadId id );
extern void interruptOSThread (OSThreadId id);
//
// Condition Variables
//
extern void initCondition ( Condition* pCond );
extern void closeCondition ( Condition* pCond );
extern rtsBool broadcastCondition ( Condition* pCond );
extern rtsBool signalCondition ( Condition* pCond );
extern rtsBool waitCondition ( Condition* pCond, Mutex* pMut );
extern bool broadcastCondition ( Condition* pCond );
extern bool signalCondition ( Condition* pCond );
extern bool waitCondition ( Condition* pCond, Mutex* pMut );
//
// Mutexes
......
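Because signalCondition() does not flag the condition for a thread that is not yet waiting, callers pair it with a sticky wakeup flag; the Capability.c changes further down show the pattern, condensed into one sketch here:

    /* Sketch: the sticky-wakeup handshake used around these primitives. */

    /* Waker (holding task->lock): */
    if (task->wakeup == false) {
        task->wakeup = true;            /* sticky: survives a missed signal */
        signalCondition(&task->cond);
    }

    /* Sleeper (holding task->lock): */
    if (!task->wakeup) waitCondition(&task->cond, &task->lock);
    task->wakeup = false;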
......@@ -36,7 +36,7 @@ StgTSO *createStrictIOThread (Capability *cap, W_ stack_size,
StgClosure *closure);
// Suspending/resuming threads around foreign calls
void * suspendThread (StgRegTable *, rtsBool interruptible);
void * suspendThread (StgRegTable *, bool interruptible);
StgRegTable * resumeThread (void *);
//
......
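These two functions bracket foreign calls made from Haskell; the calling convention, sketched (do_blocking_call is hypothetical; the token handling follows the signatures above):

    /* Sketch: releasing the capability around a blocking foreign call. */
    void *tok = suspendThread(reg, true);   /* true: call is interruptible */
    do_blocking_call();                     /* hypothetical C function */
    reg = resumeThread(tok);                /* reacquire a capability */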
......@@ -15,21 +15,17 @@
#define RTS_TYPES_H
#include <stddef.h>
#include <stdbool.h>
// Deprecated, use uint32_t instead.
typedef unsigned int nat __attribute__((deprecated)); /* uint32_t */
/* ullong (64|128-bit) type: only include if needed (not ANSI) */
#if defined(__GNUC__)
#if defined(__GNUC__)
#define LL(x) (x##LL)
#else
#define LL(x) (x##L)
#endif
typedef enum {
rtsFalse = 0,
rtsTrue
} rtsBool;
typedef struct StgClosure_ StgClosure;
typedef struct StgInfoTable_ StgInfoTable;
......
......@@ -258,18 +258,18 @@ TAG_CLOSURE(StgWord tag,StgClosure * p)
make sense...
-------------------------------------------------------------------------- */
INLINE_HEADER rtsBool LOOKS_LIKE_INFO_PTR_NOT_NULL (StgWord p)
INLINE_HEADER bool LOOKS_LIKE_INFO_PTR_NOT_NULL (StgWord p)
{
StgInfoTable *info = INFO_PTR_TO_STRUCT((StgInfoTable *)p);
return (info->type != INVALID_OBJECT && info->type < N_CLOSURE_TYPES) ? rtsTrue : rtsFalse;
return info->type != INVALID_OBJECT && info->type < N_CLOSURE_TYPES;
}
INLINE_HEADER rtsBool LOOKS_LIKE_INFO_PTR (StgWord p)
INLINE_HEADER bool LOOKS_LIKE_INFO_PTR (StgWord p)
{
return (p && (IS_FORWARDING_PTR(p) || LOOKS_LIKE_INFO_PTR_NOT_NULL(p))) ? rtsTrue : rtsFalse;
return p && (IS_FORWARDING_PTR(p) || LOOKS_LIKE_INFO_PTR_NOT_NULL(p));
}
INLINE_HEADER rtsBool LOOKS_LIKE_CLOSURE_PTR (const void *p)
INLINE_HEADER bool LOOKS_LIKE_CLOSURE_PTR (const void *p)
{
return LOOKS_LIKE_INFO_PTR((StgWord)
(UNTAG_CONST_CLOSURE((const StgClosure *)(p)))->header.info);
......
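One payoff of the switch is visible above: with rtsBool an enum, comparison results had to be mapped back through ?: to produce a value of the enum type, whereas C99 bool normalizes any scalar implicitly. A two-line illustration:

    bool b = 0x4;   /* C99: any nonzero value converts to true (1) */
    /* rtsBool e = 0x4; would have stored 4, so e == rtsTrue was false */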
......@@ -248,7 +248,7 @@ typedef struct _GCStats {
StgDouble wall_seconds;
} GCStats;
void getGCStats (GCStats *s);
rtsBool getGCStatsEnabled (void);
bool getGCStatsEnabled (void);
// These don't change over execution, so do them elsewhere
// StgDouble init_cpu_seconds;
......@@ -288,7 +288,7 @@ void dirty_MUT_VAR(StgRegTable *reg, StgClosure *p);
/* set to disable CAF garbage collection in GHCi. */
/* (needed when dynamic libraries are used). */
extern rtsBool keepCAFs;
extern bool keepCAFs;
INLINE_HEADER void initBdescr(bdescr *bd, generation *gen, generation *dest)
{
......
......@@ -10,15 +10,15 @@
#define AWAITEVENT_H
#if !defined(THREADED_RTS)
/* awaitEvent(rtsBool wait)
/* awaitEvent(bool wait)
*
* Checks for blocked threads that need to be woken.
*
* Called from STG : NO
* Locks assumed : sched_mutex
*/
RTS_PRIVATE void awaitEvent(rtsBool wait); /* In posix/Select.c or
* win32/AwaitEvent.c */
RTS_PRIVATE void awaitEvent(bool wait); /* In posix/Select.c or
* win32/AwaitEvent.c */
#endif
#endif /* AWAITEVENT_H */
......@@ -82,7 +82,7 @@ Capability * rts_unsafeGetMyCapability (void)
}
#if defined(THREADED_RTS)
STATIC_INLINE rtsBool
STATIC_INLINE bool
globalWorkToDo (void)
{
return sched_state >= SCHED_INTERRUPTING
......@@ -96,7 +96,7 @@ findSpark (Capability *cap)
{
Capability *robbed;
StgClosurePtr spark;
rtsBool retry;
bool retry;
uint32_t i = 0;
if (!emptyRunQueue(cap) || cap->n_returning_tasks != 0) {
......@@ -107,7 +107,7 @@ findSpark (Capability *cap)
}
do {
retry = rtsFalse;
retry = false;
// first try to get a spark from our own pool.
// We should be using reclaimSpark(), because it works without
......@@ -130,7 +130,7 @@ findSpark (Capability *cap)
return spark;
}
if (!emptySparkPoolCap(cap)) {
retry = rtsTrue;
retry = true;
}
if (n_capabilities == 1) { return NULL; } // makes no sense...
......@@ -158,7 +158,7 @@ findSpark (Capability *cap)
if (spark == NULL && !emptySparkPoolCap(robbed)) {
// we conflicted with another thread while trying to steal;
// try again later.
retry = rtsTrue;
retry = true;
}
if (spark != NULL) {
......@@ -179,17 +179,17 @@ findSpark (Capability *cap)
// The result is only valid for an instant, of course, so in a sense
// is immediately invalid, and should not be relied upon for
// correctness.
rtsBool
bool
anySparks (void)
{
uint32_t i;
for (i=0; i < n_capabilities; i++) {
if (!emptySparkPoolCap(capabilities[i])) {
return rtsTrue;
return true;
}
}
return rtsFalse;
return false;
}
#endif
......@@ -247,9 +247,9 @@ initCapability (Capability *cap, uint32_t i)
cap->no = i;
cap->node = capNoToNumaNode(i);
cap->in_haskell = rtsFalse;
cap->in_haskell = false;
cap->idle = 0;
cap->disabled = rtsFalse;
cap->disabled = false;
cap->run_queue_hd = END_TSO_QUEUE;
cap->run_queue_tl = END_TSO_QUEUE;
......@@ -482,8 +482,8 @@ giveCapabilityToTask (Capability *cap USED_IF_DEBUG, Task *task)
cap->no, task->incall->tso ? "bound task" : "worker",
serialisableTaskId(task));
ACQUIRE_LOCK(&task->lock);
if (task->wakeup == rtsFalse) {
task->wakeup = rtsTrue;
if (task->wakeup == false) {
task->wakeup = true;
// the wakeup flag is needed because signalCondition() doesn't
// flag the condition if the thread is already running, but we want
// it to be sticky.
......@@ -503,7 +503,7 @@ giveCapabilityToTask (Capability *cap USED_IF_DEBUG, Task *task)
#if defined(THREADED_RTS)
void
releaseCapability_ (Capability* cap,
rtsBool always_wakeup)
bool always_wakeup)
{
Task *task;
......@@ -586,7 +586,7 @@ void
releaseCapability (Capability* cap USED_IF_THREADS)
{
ACQUIRE_LOCK(&cap->lock);
releaseCapability_(cap, rtsFalse);
releaseCapability_(cap, false);
RELEASE_LOCK(&cap->lock);
}
......@@ -594,7 +594,7 @@ void
releaseAndWakeupCapability (Capability* cap USED_IF_THREADS)
{
ACQUIRE_LOCK(&cap->lock);
releaseCapability_(cap, rtsTrue);
releaseCapability_(cap, true);
RELEASE_LOCK(&cap->lock);
}
......@@ -620,7 +620,7 @@ enqueueWorker (Capability* cap USED_IF_THREADS)
{
debugTrace(DEBUG_sched, "%d spare workers already, exiting",
cap->n_spare_workers);
releaseCapability_(cap,rtsFalse);
releaseCapability_(cap,false);
// hold the lock until after workerTaskStop; c.f. scheduleWorker()
workerTaskStop(task);
RELEASE_LOCK(&cap->lock);
......@@ -648,7 +648,7 @@ static Capability * waitForWorkerCapability (Task *task)
// task->lock held, cap->lock not held
if (!task->wakeup) waitCondition(&task->cond, &task->lock);
cap = task->cap;
task->wakeup = rtsFalse;
task->wakeup = false;
RELEASE_LOCK(&task->lock);
debugTrace(DEBUG_sched, "woken up on capability %d", cap->no);
......@@ -713,7 +713,7 @@ static Capability * waitForReturnCapability (Task *task)
// task->lock held, cap->lock not held
if (!task->wakeup) waitCondition(&task->cond, &task->lock);
cap = task->cap;
task->wakeup = rtsFalse;
task->wakeup = false;
RELEASE_LOCK(&task->lock);
// now check whether we should wake up...
......@@ -843,9 +843,9 @@ void waitForCapability (Capability **pCap, Task *task)
#if defined (THREADED_RTS)
/* See Note [GC livelock] in Schedule.c for why we have gcAllowed
and return the rtsBool */
rtsBool /* Did we GC? */
yieldCapability (Capability** pCap, Task *task, rtsBool gcAllowed)
and return the bool */
bool /* Did we GC? */
yieldCapability (Capability** pCap, Task *task, bool gcAllowed)
{
Capability *cap = *pCap;
......@@ -861,7 +861,7 @@ yieldCapability (Capability** pCap, Task *task, rtsBool gcAllowed)
traceSparkCounters(cap);
// See Note [migrated bound threads 2]
if (task->cap == cap) {
return rtsTrue;
return true;
}
}
}
......@@ -870,7 +870,7 @@ yieldCapability (Capability** pCap, Task *task, rtsBool gcAllowed)
debugTrace(DEBUG_sched, "giving up capability %d", cap->no);
// We must now release the capability and wait to be woken up again.
task->wakeup = rtsFalse;
task->wakeup = false;
ACQUIRE_LOCK(&cap->lock);
......@@ -879,7 +879,7 @@ yieldCapability (Capability** pCap, Task *task, rtsBool gcAllowed)
enqueueWorker(cap);
}
releaseCapability_(cap, rtsFalse);
releaseCapability_(cap, false);
if (isWorker(task) || isBoundTask(task)) {
RELEASE_LOCK(&cap->lock);
......@@ -906,7 +906,7 @@ yieldCapability (Capability** pCap, Task *task, rtsBool gcAllowed)
ASSERT_FULL_CAPABILITY_INVARIANTS(cap,task);
return rtsFalse;
return false;
}
#endif /* THREADED_RTS */
......@@ -954,7 +954,7 @@ prodCapability (Capability *cap, Task *task)
ACQUIRE_LOCK(&cap->lock);
if (!cap->running_task) {
cap->running_task = task;
releaseCapability_(cap,rtsTrue);
releaseCapability_(cap,true);
}
RELEASE_LOCK(&cap->lock);
}
......@@ -970,21 +970,21 @@ prodCapability (Capability *cap, Task *task)
#if defined (THREADED_RTS)
rtsBool
bool
tryGrabCapability (Capability *cap, Task *task)
{
int r;
if (cap->running_task != NULL) return rtsFalse;
if (cap->running_task != NULL) return false;
r = TRY_ACQUIRE_LOCK(&cap->lock);
if (r != 0) return rtsFalse;
if (r != 0) return false;
if (cap->running_task != NULL) {
RELEASE_LOCK(&cap->lock);
return rtsFalse;
return false;
}
task->cap = cap;
cap->running_task = task;
RELEASE_LOCK(&cap->lock);
return rtsTrue;
return true;
}
......@@ -1008,7 +1008,7 @@ tryGrabCapability (Capability *cap, Task *task)
static void
shutdownCapability (Capability *cap USED_IF_THREADS,
Task *task USED_IF_THREADS,
rtsBool safe USED_IF_THREADS)
bool safe USED_IF_THREADS)
{
#if defined(THREADED_RTS)
uint32_t i;
......@@ -1062,7 +1062,7 @@ shutdownCapability (Capability *cap USED_IF_THREADS,
if (!emptyRunQueue(cap) || cap->spare_workers) {
debugTrace(DEBUG_sched,
"runnable threads or workers still alive, yielding");
releaseCapability_(cap,rtsFalse); // this will wake up a worker
releaseCapability_(cap,false); // this will wake up a worker
RELEASE_LOCK(&cap->lock);
yieldThread();
continue;
......@@ -1106,7 +1106,7 @@ shutdownCapability (Capability *cap USED_IF_THREADS,
}
void
shutdownCapabilities(Task *task, rtsBool safe)
shutdownCapabilities(Task *task, bool safe)
{
uint32_t i;
for (i=0; i < n_capabilities; i++) {
......@@ -1157,7 +1157,7 @@ freeCapabilities (void)
void
markCapability (evac_fn evac, void *user, Capability *cap,
rtsBool no_mark_sparks USED_IF_THREADS)
bool no_mark_sparks USED_IF_THREADS)
{
InCall *incall;
......@@ -1191,12 +1191,12 @@ markCapabilities (evac_fn evac, void *user)
{
uint32_t n;
for (n = 0; n < n_capabilities; n++) {
markCapability(evac, user, capabilities[n], rtsFalse);
markCapability(evac, user, capabilities[n], false);
}
}
#if defined(THREADED_RTS)
rtsBool checkSparkCountInvariant (void)
bool checkSparkCountInvariant (void)
{
SparkCounters sparks = { 0, 0, 0, 0, 0, 0 };
StgWord64 remaining = 0;
......
......@@ -53,12 +53,12 @@ struct Capability_ {
// true if this Capability is running Haskell code, used for
// catching unsafe call-ins.
rtsBool in_haskell;
bool in_haskell;
// Has there been any activity on this Capability since the last GC?
uint32_t idle;
rtsBool disabled;
bool disabled;
// The run queue. The Task owning this Capability has exclusive
// access to its run queue, so can wake up threads without
......@@ -204,7 +204,7 @@ struct Capability_ {
ASSERT_TASK_ID(task);
#if defined(THREADED_RTS)
rtsBool checkSparkCountInvariant (void);
bool checkSparkCountInvariant (void);
#endif
// Converts a *StgRegTable into a *Capability.
......@@ -232,14 +232,14 @@ void moreCapabilities (uint32_t from, uint32_t to);
#if defined(THREADED_RTS)
void releaseCapability (Capability* cap);
void releaseAndWakeupCapability (Capability* cap);
void releaseCapability_ (Capability* cap, rtsBool always_wakeup);
void releaseCapability_ (Capability* cap, bool always_wakeup);
// assumes cap->lock is held
#else
// releaseCapability() is empty in non-threaded RTS
INLINE_HEADER void releaseCapability (Capability* cap STG_UNUSED) {};
I