Bug #1184

Affinity setting failed bug

Added by Reid Van Lehn over 6 years ago. Updated over 5 years ago.

Status: Closed
Priority: High
Assignee: -
Category: mdrun
Target version: 4.6.x
Affected version - extra info: -
Affected version: 4.6
Difficulty: uncategorized

Description

Today I ran into a bug I do not understand, after upgrading from v4.5.5 to v4.6. I'm using older 8-core Intel Xeon E5430 machines, and when I submitted an 8-core job to one of the nodes I received the following error:

NOTE: In thread-MPI thread #3: Affinity setting failed.
This can cause performance degradation!

NOTE: In thread-MPI thread #2: Affinity setting failed.
This can cause performance degradation!

NOTE: In thread-MPI thread #1: Affinity setting failed.
This can cause performance degradation!

I ran mdrun simply with the flags:

mdrun -v -ntmpi 8 -deffnm em

Using the top command, I confirmed that no other programs were running and that mdrun was in fact only using 5 cores. Turning pinning off explicitly with -pin off (rather than -pin auto) did correctly give me all 8 cores again.

Operating system: CentOS 5.4, replicated on 5.9 as well

Output of mdrun -version:

Gromacs version: VERSION 4.6
Precision: single
Memory model: 64 bit
MPI library: thread_mpi
OpenMP support: enabled
GPU support: disabled
invsqrt routine: gmx_software_invsqrt(x)
CPU acceleration: SSE4.1
FFT library: fftw-3.3.2-sse2
Large file support: enabled
RDTSCP usage: disabled
Built on: Sat Mar 2 15:33:07 PST 2013
Build OS/arch: Linux 2.6.18-164.el5 x86_64
Build CPU vendor: GenuineIntel
Build CPU brand: Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
Build CPU family: 6 Model: 23 Stepping: 10
Build CPU features: apic clfsh cmov cx8 cx16 lahf_lm mmx msr pdcm pse sse2 sse3 sse4.1 ssse3
C compiler: /opt/intel/composer_xe_2011_sp1.10.319/bin/intel64/icc Intel icc (ICC) 12.1.4 20120410
C compiler flags: -msse4.1 -std=gnu99 -Wall -ip -funroll-all-loops -O3 -DNDEBUG


Related issues

Related to GROMACS - Task #1419: make thread affinity setting more robust (Rejected)

History

#1 Updated by Szilárd Páll over 6 years ago

The sched_setaffinity(2) man page states:

The CPU affinity system calls were introduced in Linux kernel 2.5.8. The system call wrappers were introduced in glibc 2.3. Initially, the glibc interfaces included a cpusetsize argument, typed as unsigned int. In glibc 2.3.3, the cpusetsize argument was removed, but was then restored in glibc 2.3.4, with type size_t.

Although your 2.6.18 kernel should have the affinity interfaces, it is old enough that I suspect it could very well be causing issues. Alternatively, it could also be glibc. What version are you using?

Could you just try to install a new kernel and/or glibc and see whether the problem persists?
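
For reference, here is a minimal sketch of the kind of call that is failing, assuming a Linux/glibc environment (illustrative only, not the actual GROMACS code). On Linux, errno 22 is EINVAL, which matches the "returned 22" lines in the debug output later in this thread (note 7): asking for a core that does not exist fails exactly this way.

#define _GNU_SOURCE
#include <sched.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(62, &mask);  /* a core index that does not exist on an 8-core node */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
    {
        /* EINVAL (22): the mask contains no CPU physically present on the system */
        printf("Affinity setting failed: errno=%d (%s)\n", errno, strerror(errno));
    }
    return 0;
}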

#2 Updated by Oliver Beckstein over 6 years ago

I experienced a seemingly similar problem on the same hardware but a modern OS and Linux kernel:

  • Gromacs: 4.6.1 (compiled with Intel 13.0 and SSE2 only, details below)
  • CPUs: dual Intel E5420 @ 2.50GHz (+ NVIDIA GTX-680)
  • distribution: Ubuntu 12.04.2
  • kernel: Linux 3.2.0-38-generic #61-Ubuntu SMP Tue Feb 19 12:18:21 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

mdrun only appears to run on 5 CPUs (judging by 500% CPU usage in top) even though I use 'mdrun -nt 8'. The log file contains the warning:

Using 1 MPI thread
Using 8 OpenMP threads

...

Pinning threads with a logical core stride of 1

NOTE: In thread-MPI thread #0: Affinity setting of 3/8 threads failed.
      This can cause performance degradation!

Using 'mdrun -pin off' helps to recover full performance (for a small ~35k atom system 22 ns/d instead of 3 ns/d, using the GPU).

Details of the Gromacs binary (from log):

Log file opened on Fri Mar  8 09:13:21 2013
Host: darthtater  pid: 32252  nodeid: 0  nnodes:  1
Gromacs version:    VERSION 4.6.1
Precision:          single
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     enabled
GPU support:        enabled
invsqrt routine:    gmx_software_invsqrt(x)
CPU acceleration:   SSE2
FFT library:        fftw-3.3.2-sse2
Large file support: enabled
RDTSCP usage:       disabled
Built on:           Wed Mar  6 13:13:45 MST 2013
Built by:           oliver@darthtater [CMAKE]
Build OS/arch:      Linux 3.2.0-38-generic x86_64
Build CPU vendor:   GenuineIntel
Build CPU brand:    Intel(R) Xeon(R) CPU           E5420  @ 2.50GHz
Build CPU family:   6   Model: 23   Stepping: 10
Build CPU features: apic clfsh cmov cx8 lahf_lm mmx msr pse sse2 sse3 ssse3
C compiler:         /nfs/packages/opt/Linux_x86_64/intel/13.0/bin/icc Intel icc (ICC) 13.0.0 20120731
C compiler flags:   -msse2   -std=gnu99 -Wall   -ip -funroll-all-loops  -O3 -DNDEBUG
C++ compiler:       /nfs/packages/opt/Linux_x86_64/intel/13.0/bin/icpc Intel icpc (ICC) 13.0.0 20120731
C++ compiler flags: -msse2   -Wall   -ip -funroll-all-loops  -O3 -DNDEBUG
CUDA compiler:      nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2012 NVIDIA Corporation;Built on Fri_Sep_21_17:28:58_PDT_2012;Cuda compilation tools, release 5.0, V0.2.1221
CUDA driver:        5.0
CUDA runtime:       5.0

#3 Updated by Szilárd Páll over 6 years ago

Damn, I'm starting to see a pattern. This might be caused by the Intel OpenMP library, which annoyingly tries to set affinities itself, but in a way that our mdrun-internal detection does not notice.

Could you both try to set the env var KMP_AFFINITY=disabled?

#4 Updated by Oliver Beckstein over 6 years ago

Szilárd Páll wrote:

Could you both try to set the env var KMP_AFFINITY=disabled?

I tried the same as above (mdrun -nt 8) and it did not make any difference; I still get "In thread-MPI thread #0: Affinity setting of 3/8 threads failed."

When I tried mdrun -ntmpi 8 -nb cpu (also with export KMP_AFFINITY=disabled) I do not get the warning message, but still only get a CPU load of 500% (instead of 800%).

#5 Updated by Oliver Beckstein over 6 years ago

I also ran mdrun -nt 8 -nb cpu with 4.6.1 compiled with GCC (see below for compilation details) and no setting of KMP_AFFINITY. The result is the same: the total load is <800% and the log file prints the warning:

Using 1 MPI thread
Using 8 OpenMP threads 
...
Pinning threads with a logical core stride of 1

NOTE: In thread-MPI thread #0: Affinity setting of 3/8 threads failed.
      This can cause performance degradation!

Details of Gromacs:

Gromacs version:    VERSION 4.6.1
Precision:          single
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     enabled
GPU support:        enabled
invsqrt routine:    gmx_software_invsqrt(x)
CPU acceleration:   SSE2
FFT library:        fftw-3.3.2-sse2
Large file support: enabled
RDTSCP usage:       disabled
Built on:           Fri Mar  8 13:24:19 MST 2013
Built by:           oliver@darthtater [CMAKE]
Build OS/arch:      Linux 3.2.0-38-generic x86_64
Build CPU vendor:   GenuineIntel
Build CPU brand:    Intel(R) Xeon(R) CPU           E5420  @ 2.50GHz
Build CPU family:   6   Model: 23   Stepping: 10
Build CPU features: apic clfsh cmov cx8 lahf_lm mmx msr pse sse2 sse3 ssse3
C compiler:         /usr/bin/gcc GNU gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
C compiler flags:   -msse2   -Wextra -Wno-missing-field-initializers -Wno-sign-compare -Wall -Wno-unused -Wunused-value   -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast  -O3 -DNDEBUG
C++ compiler:       /usr/bin/c++ GNU c++ (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
C++ compiler flags: -msse2   -Wextra -Wno-missing-field-initializers -Wno-sign-compare -Wall -Wno-unused -Wunused-value   -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast  -O3 -DNDEBUG
CUDA compiler:      nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2012 NVIDIA Corporation;Built on Fri_Sep_21_17:28:58_PDT_2012;Cuda compilation tools, release 5.0, V0.2.1221
CUDA driver:        5.0
CUDA runtime:       5.0

#6 Updated by Roland Schulz over 6 years ago

Could you run with "mdrun -debug 1" (feel free to add any other mdrun arguments you used before; it is sufficient to run for 0 steps)? And then post the output of "grep affinity mdrun.debug"?

#7 Updated by Reid Van Lehn over 6 years ago

A few updates from me:

Using the same hardware/software as in my initial post (i.e. I did not update my kernel), I first ran with KMP_AFFINITY=disabled and observed identical behavior: 3 "thread-MPI Affinity setting failed" errors and 500% CPU utilization. It's also interesting to note that despite the 500% utilization (versus 800% with pinning off), the actual performance is only 1/4 of the full 8-core performance. Grepping for "affinity" per Roland's suggestion showed:

Default affinity mask found
Default affinity mask found
Default affinity mask found
Default affinity mask found
Default affinity mask found
Default affinity mask found
Default affinity mask found
Default affinity mask found
Default affinity mask found
On rank 3, thread 0, core 62 the affinity setting returned 22
On rank 2, thread 0, core 1641368296 the affinity setting returned 22
On rank 1, thread 0, core 62 the affinity setting returned 22
On rank 0, thread 0, core 7 the affinity setting returned 0
On rank 7, thread 0, core 0 the affinity setting returned 0
On rank 6, thread 0, core 0 the affinity setting returned 0
On rank 5, thread 0, core 0 the affinity setting returned 0
On rank 4, thread 0, core 0 the affinity setting returned 0

The 3 threads which returned non-zero affinity settings are the 3 that triggered errors in the mdrun output.

Next, I simply ran "export KMP_AFFINITY=" to try to reset this flag to empty. I then ran and got the message:
"NOTE: KMP_AFFINITY set, will turn off mdrun internal affinity
setting as the two can conflict and cause performance degradation.
To keep using the mdrun internal affinity setting, set the
KMP_AFFINITY=disabled environment variable."

I did not get the threadMPI affinity setting failed messages, and instead got normal performance (800% on top, correct # of ns/day). Grepping for "affinity" in mdrun.debug showed nothing.

Best,
- Reid

#8 Updated by Roland Schulz over 6 years ago

Is this with hyperthreading active or disabled?
Could you add line

printf("%d: (%d*%d+%d)*%d+%d=%d", i, cpuid->package_id[i], cpuid->ncores_per_package, cpuid->core_id[i], cpuid->nhwthreads_per_core, cpuid->hwthread_id[i], idx)

after line 442 in file src/gmxlib/gmx_cpuid.c (idx=...) and post the output?

#9 Updated by Reid Van Lehn over 6 years ago

Hi Roland,

I think these processors lack hyper-threading (http://ark.intel.com/products/33081/Intel-Xeon-Processor-E5430-12M-Cache-2_66-GHz-1333-MHz-FSB), or at least I don't know how to turn it on/off if they do.

Here is the output of the line you suggested:

0: (0*1+0)*1+0=0
1: (0*1+0)*1+0=0
2: (0*1+0)*1+0=0
3: (0*1+0)*1+0=0
4: (0*1+0)*1+0=0
5: (0*1+0)*1+0=0
6: (0*1+0)*1+0=0

Using 8 MPI threads

NOTE: In thread-MPI thread #3: Affinity setting failed.
This can cause performance degradation!

NOTE: In thread-MPI thread #1: Affinity setting failed.
This can cause performance degradation!

NOTE: In thread-MPI thread #2: Affinity setting failed.
This can cause performance degradation!

#10 Updated by Reid Van Lehn over 6 years ago

Sorry, and just to be clear: the output of the printf statements was the same whether pinning was on (as above) or off (no errors).

#11 Updated by Szilárd Páll over 6 years ago

Reid Van Lehn wrote:

Using same hardware/software info as in my initial post (i.e. I did not update my Kernel), I ran first with KMP_AFFINITY=disabled, and observed identical behavior - 3 "thread-MPI Affinity setting failed" errors and 500% CPU utilization. It's also interesting to note that despite using 500% instead of 800% with pinning off, the actual performance is only 1/4 of the full 8 core performance.

That's because when multiple threads get locked to the same core, their performance degrades, and that slows down the entire simulation: the non-overlapping threads have to wait for all the others at the end of every MD iteration. For example, four threads pinned to core 0 each run at roughly 1/4 speed, so every iteration finishes at about 1/4 of full speed even though top shows five cores' worth of utilization.

On rank 3, thread 0, core 62 the affinity setting returned 22
On rank 2, thread 0, core 1641368296 the affinity setting returned 22
On rank 1, thread 0, core 62 the affinity setting returned 22
On rank 0, thread 0, core 7 the affinity setting returned 0
On rank 7, thread 0, core 0 the affinity setting returned 0
On rank 6, thread 0, core 0 the affinity setting returned 0
On rank 5, thread 0, core 0 the affinity setting returned 0
On rank 4, thread 0, core 0 the affinity setting returned 0

Those core numbers look fishy, but looking at the code I don't immediately see what causes them to not be equal to the tMPI rank. Roland, do you have any idea?

Next, I simply ran "export KMP_AFFINITY=" to try to reset this flag to empty. I then ran and got the message:
"NOTE: KMP_AFFINITY set, will turn off mdrun internal affinity
setting as the two can conflict and cause performance degradation.
To keep using the mdrun internal affinity setting, set the
KMP_AFFINITY=disabled environment variable."

That's not correct. If you want to remove an environment variable you have exported, you should use unset ENV_VAR. KMP_AFFINITY set to an empty string is not a valid input to the Intel OpenMP library; the valid inputs are defined in Intel's documentation.

I did not get the threadMPI affinity setting failed messages, and instead got normal performance (800% on top, correct # of ns/day). Grepping for "affinity" in mdrun.debug showed nothing.

Well, that's because, as the above message states, mdrun detected that KMP_AFFINITY is set and backs off to avoid overriding affinity settings requested through the environment.
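
As an aside, this also explains why exporting an empty KMP_AFFINITY still disables the internal pinning: from C, an exported-but-empty variable is still present in the environment. A minimal sketch of this kind of check (illustrative only, not the actual GROMACS code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* getenv() returns "" after 'export KMP_AFFINITY=' but NULL after
       'unset KMP_AFFINITY' -- so an empty string still counts as set */
    const char *kmp = getenv("KMP_AFFINITY");
    if (kmp != NULL && strcmp(kmp, "disabled") != 0)
    {
        printf("KMP_AFFINITY set, turning off internal affinity setting\n");
    }
    return 0;
}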

#12 Updated by Szilárd Páll over 6 years ago

Roland Schulz wrote:

after line 442 in file src/gmxlib/gmx_cpuid.c (idx=...) and post the output?

I guess you meant line 495 (just in case someone else wants to try this debug print).

#13 Updated by Roland Schulz over 6 years ago

Yes, sorry, I meant line 495. I accidentally counted only non-blank lines.

#14 Updated by Szilárd Páll over 6 years ago

As I suspected that multiple fields of the cpuid structure are incorrect, I tried the code on a dual-socket Intel Westmere machine (2x X5650, HT on). Here's the output:
0: (0*6+0)*2+0=0
1: (0*6+1)*2+0=2
2: (0*6+2)*2+0=4
3: (0*6+3)*2+0=6
4: (0*6+4)*2+0=8
5: (0*6+5)*2+0=10
6: (1*6+0)*2+0=12
7: (1*6+1)*2+0=14
8: (1*6+2)*2+0=16
9: (1*6+3)*2+0=18
10: (1*6+4)*2+0=20
11: (1*6+5)*2+0=22
12: (0*6+0)*2+1=1
13: (0*6+1)*2+1=3
14: (0*6+2)*2+1=5
15: (0*6+3)*2+1=7
16: (0*6+4)*2+1=9
17: (0*6+5)*2+1=11
18: (1*6+0)*2+1=13
19: (1*6+1)*2+1=15
20: (1*6+2)*2+1=17
21: (1*6+3)*2+1=19
22: (1*6+4)*2+1=21
23: (1*6+5)*2+1=23

Comparing Reid's debug output in #9 with the above, besides the difference in HT (that is, nhwthreads_per_core=2 instead of 1), at least the following fields are incorrect (a sketch of the expected E5430 output follows this list):
  • package_id (should be 1 for i>=4);
  • ncores_per_package (should be 4);
  • core_id (should be i % 4).
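
For comparison, here is what the debug line from comment #8 should print on a correctly detected dual-socket quad-core E5430 without HT, assuming (and the enumeration order here is an assumption) that logical CPU i maps to package i/4, core i%4:

#include <stdio.h>

int main(void)
{
    const int ncores_per_package = 4, nhwthreads_per_core = 1;
    int i;
    for (i = 0; i < 8; i++)
    {
        int package_id  = i / 4;   /* assumed enumeration order */
        int core_id     = i % 4;
        int hwthread_id = 0;       /* no HT on Harpertown */
        int idx = (package_id*ncores_per_package + core_id)*nhwthreads_per_core
                  + hwthread_id;
        printf("%d: (%d*%d+%d)*%d+%d=%d\n", i, package_id, ncores_per_package,
               core_id, nhwthreads_per_core, hwthread_id, idx);
    }
    return 0;
}

i.e. a simple identity mapping 0..7, rather than the all-zero indices Reid reported.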

#15 Updated by Szilárd Páll over 6 years ago

  • Category set to mdrun
  • Priority changed from Normal to High
  • Target version set to 4.6.2

Bumped priority as it looks like this bug will cause a severe slowdown on Intel Harpertown even with a new kernel (Oliver is using 3.2). Unless I'm missing something, this should be a bug in the cpuid detection.

One related thing to add: we could add hard-coded software-to-hardware thread mapping layouts and allow the user to select them via an environment variable (in case the cpuid detection fails or the platform is not supported). By simply adding the few possible Intel x86 layouts (interleaved, and sequential with "real cores" first and "HT cores" last) we could suggest a manual workaround whenever the cpuid information is detected incorrectly. Without such an option the user is stuck with either completely disabling affinity setting or setting the affinity manually.

#16 Updated by Erik Lindahl over 6 years ago

Although I understand Reid's problem, I would keep it as normal priority since these processors were released five years ago, after all, and it does not affect scientific results.

I'm not sure about Harpertown, but in general the CPUID instruction is a complete mess for older architectures, so unless somebody has time to dig into the old architectures, an easier solution could simply be to disable CPU pinning if thread affinity locking fails.

#17 Updated by Erik Lindahl over 6 years ago

PS: After a quick googling, I think these processors simply don't support X2APIC enumeration, but for some reason they still support the 0xB CPUID function (which only does X2APIC).

You could try checking for this by changing the line "if (max_stdfn >= 0xB)" (around line 641) to
"if (max_stdfn >= 0xB && cpuid->feature[GMX_CPUID_FEATURE_X86_X2APIC])".

If my hunch is right, the routine will then simply return that we cannot get X2APIC on your processor, and then you'll unfortunately have to live without pinning (but that is much better than incorrect pinning).

#18 Updated by Roland Schulz over 6 years ago

If we assume no one is using Pentium 4 CPUs anymore (signature < 0x006F0) and that the CPUID detection works correctly on Nehalem and newer (>=0x106A0), we can simply check whether the model is older than Nehalem and use the default layout without HT (none of those have HT). If we don't want to ignore <0x006F0 (even though it is really old) we could either warn or disable affinity. Does that sound good as a solution?

I used http://software.intel.com/en-us/articles/intel-architecture-and-processor-identification-with-cpuid-model-and-family-numbers for the signature numbers.
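
To make the proposed check concrete, here is a hedged sketch (not GROMACS code; the mask and boundaries follow the Intel signature article linked above, and __get_cpuid is GCC's cpuid.h helper):

#include <cpuid.h>
#include <stdio.h>

/* CPUID leaf 1 EAX with the stepping field masked out, i.e. the extended
   family/model plus family/model bits -- the "signature" in Intel's article */
static unsigned int cpuid_signature(void)
{
    unsigned int eax = 0, ebx, ecx, edx;
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);
    return eax & 0x0fff0ff0;
}

int main(void)
{
    unsigned int sig = cpuid_signature();
    if (sig < 0x006F0)
        printf("pre-Core2 CPU (0x%05x): warn or disable affinity\n", sig);
    else if (sig < 0x106A0)
        printf("pre-Nehalem Core (0x%05x): no HT, assume linear layout\n", sig);
    else
        printf("Nehalem or newer (0x%05x): trust X2APIC topology detection\n", sig);
    return 0;
}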

#19 Updated by Erik Lindahl over 6 years ago

The cpuid code can only do X2APIC-based detection on intel, so if that is not present CPU topology information is simply not supported. I think my suggested change above will handle the feature/bug that the CPU still claims to support CPUID function 0xB. We cannot hardcode hacks for specific old processors since that would still be a guess - we don't know what the CPU topology is in the absence of X2APIC, so that is simply the only thing we can return from gmx_cpuid. In the higher-level routine I would simply disable pinning if cpu topology is not present.

#20 Updated by Roland Schulz over 6 years ago

I agree we should disable pinning if anything goes wrong. Not only if X2APIC is unavailable but also if setting the affinity fails for any other reason on some of the cores.

It isn't a guess that for 0x006F0 < signature < 0x106A0 the CPU doesn't have HT. So I think we can also do this small extra step, given that these CPUs are still around, the fix is so easy, and we have several users asking about it.

#21 Updated by Roland Schulz over 6 years ago

I just checked with an E5450 and there pinning seems to work fine, so it seems to be a problem with only some Harpertown CPUs. Given that we cannot check for oversubscription without topology information, it is indeed better not to pin with the default "-pin auto" when we don't have the topology. But "-pin on" should pin. (It is bug #1122 that "-pin on" doesn't always work.)
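
The policy described above could look something like this (a hedged sketch with hypothetical names, not the actual mdrun code):

typedef enum { PIN_AUTO, PIN_ON, PIN_OFF } pin_mode_t;

/* decide whether to set thread affinities, per comment #21 */
static int should_pin(pin_mode_t mode, int have_topology)
{
    if (mode == PIN_OFF)
        return 0;
    if (mode == PIN_ON)
        return 1;             /* explicit user request: always honor it */
    return have_topology;     /* -pin auto: pin only when the detected
                                 topology can be trusted */
}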

#22 Updated by Szilárd Páll over 6 years ago

Erik Lindahl wrote:

Although I understand Reid's problem, I would keep it as normal priority since these processors were released five years ago, after all, and it does not affect scientific results.

Erik, I increased the priority because this is, IMHO, a rather severe case of the topology detection being too optimistic and not realizing that it is returning bogus information. This will affect quite a large number of users, probably both on desktop and server CPUs. Also, I tend to think that 5-6 year old CPUs are not so outdated - even PDC@KTH had such machines just a few months ago, let alone the numerous users getting GROMACS packaged through Linux distributions.

I'm not sure about Harpertown, but in general the CPUID instruction is a complete mess for older architectures, so unless somebody has time to dig into the old architectures an easier solution could simply be to disable CPU pinning if thread affinity locking fails.

That may not be so simple. If we first try to set affinity but fail, we will already have it set for some threads, and I'm not sure we can reliably undo it. That's why I think we should strive for rock-solid cpuid-based topology detection; otherwise we can run into issues with our code trying to be overly smart and failing at it.

#23 Updated by Erik Lindahl over 6 years ago

I'm not entirely sure what we are arguing about. I wouldn't call it "overly optimistic" that we use exactly the same check as Intel proposes in their reference code example (i.e., function 0xB being supported). Second, if we want topology information for processors not supporting X2APIC, somebody will have to implement that!

#24 Updated by Erik Lindahl over 6 years ago

PS: Do we have any concrete example where it works to pin a thread, but not to unpin it?

#25 Updated by Szilárd Páll over 6 years ago

Erik Lindahl wrote:

I'm not entirely sure what we are arguing about. I wouldn't call it "overly optimistic" that we use exactly the same check as Intel proposes in their reference code example (i.e., function 0xB being supported). Second, if we want topology information for processors not supporting X2APIC, somebody will have to implement that!

I don't mean to argue. The only statement I stand for is that regardless of whether the bug is in GROMACS, in Intel CPUs or documentation, an advanced feature which is fragile and error-prone will hurt users (in the supplied bug reports it causes 4-5x slowdown). Therefore, in my opinion, it would be useful to have a simplistic fallback/workaround (described above) which will inherently work correctly as long as the right layout is chosen.

#26 Updated by Berk Hess over 6 years ago

After discussing with Szilard I would suggest:
  • add cpuid sanity checks: e.g. thread indices in range 0...nhwthreads, no duplicate indices, etc. (see the sketch after this list)
  • for pre-Nehalem Intel, don't detect the layout; since these have no HT, assume a linear layout
  • add environment variables to select the linear and the Linux Intel-HT-style layouts
  • print a line with the layout in the log file

I can volunteer to do this if others don't have time.
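
A hedged sketch of the first bullet (hypothetical helper, not an actual GROMACS API): before trusting the detected topology for pinning, verify that every computed hardware-thread index is in range and that no two logical CPUs map to the same index. Reid's all-zero output from comment #9 would fail both tests.

/* return 1 if the computed indices form a permutation of 0..nhwthreads-1 */
static int cpuid_topology_is_sane(const int *idx, int nhwthreads)
{
    int i, j;
    for (i = 0; i < nhwthreads; i++)
    {
        if (idx[i] < 0 || idx[i] >= nhwthreads)
            return 0;            /* index out of range */
        for (j = 0; j < i; j++)
        {
            if (idx[j] == idx[i])
                return 0;        /* duplicate index */
        }
    }
    return 1;
}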

#27 Updated by Erik Lindahl over 6 years ago

Sanity checks would be a good idea, but before jumping into coding it would be good if somebody could test the suggestion in comment 17 on an E5430, so we are not missing some other, more subtle bug. If that works, the higher-level code should likely just assume a linear layout when topology information is not supported (I don't see any specific reason to limit that case to some Intel x86 processors?).

#28 Updated by Szilárd Páll over 6 years ago

Berk Hess wrote:

After discussing with Szilard I would suggest:
  • add cpuid sanity checks: e.g. thread indices in range 0...nhwthreads, no duplicate indices, etc.

I assume (but correct me if I'm wrong) that the logical-core and physical-core counts are reliable; in that case a sanity check comparing the core_id values against the hwthread_id values would also be possible.

#29 Updated by Szilárd Páll over 6 years ago

Erik Lindahl wrote:

PS: Do we have any concrete example where it works to pin a thread, but not unpinning it?

The same way the affinity setting fails when the thread mapping is buggy - in the reported case(s) get_thread_affinity_layout returns a locality_order array with incorrect values in it (see note 7) - it is fair to assume that trying to clear the affinity settings will also fail. Am I missing something?

#30 Updated by Erik Lindahl over 6 years ago

The same way the affinity setting fails when the thread mapping is buggy - in the reported case(s) get_thread_affinity_layout returns a locality_order array with incorrect values in it (see note 7) - it is fair to assume that trying to clear the affinity settings will also fail. Am I missing something?

But we won't have any problems on threads that failed to pin, since they simply never got pinned. By definition, we would only have to unpin threads where the affinity setting worked without problems?

#31 Updated by Szilárd Páll over 6 years ago

Erik Lindahl wrote:

But we won't have any problems on threads that failed to pin, since they simply never got pinned. By definition, we would only have to unpin threads where the affinity setting worked without problems?

You're right. I've just had a closer look at Reid's example above, in which I initially miscounted the number of used cores vs. the reported CPU utilization. I thought it was 3 failed + 5 overlapping = 4 cores != 500%, but I just noticed that only 4 threads overlap on core 0; one of the threads gets lucky with a valid core number (7).

This seems to indicate that we could un-pin the threads if any of them failed to set affinity. Now the only question is where/how this should be implemented (note that ATM it is the thread-MPI API that provides the affinity setting functionality).
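
Un-pinning a successfully pinned thread would amount to restoring a full affinity mask. A minimal sketch, assuming Linux/glibc (illustrative only; where this would live relative to the thread-MPI API is exactly the open question above):

#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>

/* reset the calling thread's mask to all online CPUs */
static int unpin_this_thread(void)
{
    cpu_set_t mask;
    long c, n = sysconf(_SC_NPROCESSORS_ONLN);

    CPU_ZERO(&mask);
    for (c = 0; c < n; c++)
    {
        CPU_SET((int)c, &mask);
    }
    return sched_setaffinity(0, sizeof(mask), &mask);
}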

#32 Updated by Mark Abraham over 6 years ago

  • Target version changed from 4.6.2 to 4.6.3
  • Affected version set to 4.6

#33 Updated by Mark Abraham about 6 years ago

  • Target version changed from 4.6.3 to 4.6.x

#34 Updated by Szilárd Páll over 5 years ago

  • Related to Task #1419: make thread affinity setting more robust added

#35 Updated by Szilárd Páll over 5 years ago

What issues are remaining for this to be closed?

FYI: I've opened a related issue (#1419).

#36 Updated by Szilárd Páll over 5 years ago

  • Status changed from New to Feedback wanted

#37 Updated by Rossen Apostolov over 5 years ago

  • Status changed from Feedback wanted to Resolved

So, can we close it? There hasn't been more feedback. I'm marking it resolved for now.

#38 Updated by Rossen Apostolov over 5 years ago

  • Status changed from Resolved to Closed

Closing this one.
