
C++ windows time

I have a problem using time: I want to get the current time in microseconds on Windows using C++.

I can't find a way to do it.


The "canonical" answer was given by unwind :

One popular way is using the QueryPerformanceCounter() call.

There are, however, a few problems with this method:

  1. It's intended for measuring time intervals, not absolute time. This means you have to write code to establish an "epoch time" from which you will measure precise intervals. This is called calibration.
  2. As you calibrate your clock, you also need to periodically adjust it so it never gets too far out of sync with your system clock (this divergence is called drift).
  3. QueryPerformanceCounter is not implemented in user space; this means a context switch is needed to call the kernel side of the implementation, and that is relatively expensive (around 0.7 microseconds). This seems to be required to support legacy hardware.

Not all is lost, though. Points 1 and 2 are something you can solve with a bit of coding, and 3 can be replaced with a direct call to RDTSC (available in newer versions of Visual C++ via the __rdtsc() intrinsic), as long as you know the accurate CPU clock frequency. Although on older CPUs such a call would be susceptible to changes in the CPU's internal clock speed, on all newer Intel and AMD CPUs it is guaranteed to give fairly accurate results and won't be affected by changes in CPU clock (e.g. power-saving features).

Let's get started with 1. Here is a data structure to hold the calibration data:

#include <windows.h>  // Sleep, SetThreadPriority, QueryPerformanceCounter, ...
#include <intrin.h>   // __cpuid, __rdtsc

struct init
{
  long long stamp; // last adjustment time
  long long epoch; // last sync time as FILETIME
  long long start; // counter ticks to match epoch
  long long freq;  // counter frequency (ticks per 10ms)

  void sync(int sleep);
};

init                  data_[2] = {};
const init* volatile  init_ = &data_[0];

Here is the code for initial calibration; it has to be given the time (in milliseconds) to wait for the clock to move. I've found that 500 milliseconds gives pretty good results (the shorter the time, the less accurate the calibration). For the purpose of calibration we are going to use QueryPerformanceCounter() and friends. You only need to call it for data_[0], since data_[1] will be updated by the periodic clock adjustment (below).

void init::sync(int sleep)
{
  LARGE_INTEGER t1, t2, p1, p2, r1, r2, f;
  int cpu[4] = {};

  // prepare for rdtsc calibration - affinity and priority
  SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
  SetThreadAffinityMask(GetCurrentThread(), 2);
  Sleep(10);

  // frequency for time measurement during calibration
  QueryPerformanceFrequency(&f);

  // for explanation why RDTSC is safe on modern CPUs, look for "Constant TSC" and "Invariant TSC" in
  // Intel(R) 64 and IA-32 Architectures Software Developer’s Manual (document 253668.pdf)

  __cpuid(cpu, 0); // flush CPU pipeline
  r1.QuadPart = __rdtsc();
  __cpuid(cpu, 0);
  QueryPerformanceCounter(&p1);

  // sleep some time, doesn't matter it's not accurate.
  Sleep(sleep);

  // wait for the system clock to move, so we have exact epoch
  GetSystemTimeAsFileTime((FILETIME*) (&t1.u));
  do
  {
    Sleep(0);
    GetSystemTimeAsFileTime((FILETIME*) (&t2.u));
    __cpuid(cpu, 0); // flush CPU pipeline
    r2.QuadPart = __rdtsc();
  } while(t2.QuadPart == t1.QuadPart);

  // measure how much time has passed exactly, using more expensive QPC
  __cpuid(cpu, 0);
  QueryPerformanceCounter(&p2);

  stamp = t2.QuadPart;
  epoch = t2.QuadPart;
  start = r2.QuadPart;

  // calculate counter ticks per 10ms
  freq = f.QuadPart * (r2.QuadPart-r1.QuadPart) / 100 / (p2.QuadPart-p1.QuadPart);

  SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_NORMAL);
  SetThreadAffinityMask(GetCurrentThread(), 0xFF);
}

With good calibration data you can calculate exact time from cheap RDTSC (I measured the call and calculation to be ~25 nanoseconds on my machine). There are three things to note:

  1. The return type is binary-compatible with the FILETIME structure and is precise to 100 ns, unlike GetSystemTimeAsFileTime (which increments in 10-30 ms intervals, or 1 millisecond at best).

  2. In order to avoid expensive integer-to-double-to-integer conversions, the whole calculation is performed in 64-bit integers. Even though these can hold huge numbers, there is a real risk of integer overflow, so start must be brought forward periodically to avoid it. This is done in the clock adjustment.

  3. We are making a copy of the calibration data, because it might have been updated during our call by the clock adjustment running in another thread.

Here is the code to read current time with high precision. Return value is binary compatible with FILETIME, i.e. number of 100-nanosecond intervals since Jan 1, 1601.

long long now()
{
  // must make a copy
  const init* it = init_;
  // __cpuid(cpu, 0) - no need to flush CPU pipeline here
  const long long p = __rdtsc();
  // time passed from epoch in counter ticks
  long long d = (p - it->start);
  if (d > 0x80000000000ll)
  {
    // closing to integer overflow, must adjust now
    adjust();
  }
  // convert 10ms to 100ns periods
  d *= 100000ll;
  d /= it->freq;
  // and add to epoch, so we have proper FILETIME
  d += it->epoch;
  return d;
}

For clock adjustment, we need to capture the exact time (as provided by the system clock) and compare it against our clock; this gives us the drift value. Next we use a simple formula to calculate an "adjusted" CPU frequency, to make our clock meet the system clock at the time of the next adjustment. Thus it is important that adjustments are called at regular intervals; I've found that it works well when called at 15-minute intervals. I use CreateTimerQueueTimer, called once at program startup, to schedule the adjustment calls (not demonstrated here).

The slight problem with capturing an accurate system time (for the purpose of calculating drift) is that we need to wait for the system clock to move, and that can take up to 30 milliseconds or so (which is a long time). If the adjustment is not performed, we risk integer overflow inside now(), not to mention uncorrected drift from the system clock. There is built-in protection against overflow in now(), but we really don't want to trigger it synchronously in a thread that happened to call now() at the wrong moment.

Here is the code for the periodic clock adjustment; the measured clock drift ends up in r->epoch - r->stamp:

void adjust()
{
  // must make a copy
  const init* it = init_;
  init* r = (init_ == &data_[0] ? &data_[1] : &data_[0]);
  LARGE_INTEGER t1, t2;

  // wait for the system clock to move, so we have exact time to compare against
  GetSystemTimeAsFileTime((FILETIME*) (&t1.u));
  long long p = 0;
  int cpu[4] = {};
  do
  {
    Sleep(0);
    GetSystemTimeAsFileTime((FILETIME*) (&t2.u));
    __cpuid(cpu, 0); // flush CPU pipeline
    p = __rdtsc();
  } while (t2.QuadPart == t1.QuadPart);

  long long d = (p - it->start);
  // convert 10ms to 100ns periods
  d *= 100000ll;
  d /= it->freq;

  r->start = p;
  r->epoch = d + it->epoch;
  r->stamp = t2.QuadPart;

  const long long dt1 = t2.QuadPart - it->epoch;
  const long long dt2 = t2.QuadPart - it->stamp;
  const double s1 = (double) d / dt1;
  const double s2 = (double) d / dt2;

  r->freq = (long long) (it->freq * (s1 + s2 - 1) + 0.5);

  InterlockedExchangePointer((volatile PVOID*) &init_, r);

  // if you have log output, this is a good point to log the calibration results
}

Lastly, two utility functions. One converts a FILETIME (including output from now()) to a SYSTEMTIME while preserving the microseconds in a separate int. The other returns the counter frequency, so your program can use __rdtsc() directly for accurate measurement of time intervals (with nanosecond precision).

void convert(SYSTEMTIME& s, int &us, long long f)
{
  LARGE_INTEGER i;
  i.QuadPart = f;
  FileTimeToSystemTime((FILETIME*) (&i.u), &s);
  s.wMilliseconds = 0;
  LARGE_INTEGER t;
  SystemTimeToFileTime(&s, (FILETIME*) (&t.u));
  us = (int) ((i.QuadPart - t.QuadPart) / 10);
}

long long frequency()
{
  // must make a copy
  const init* it = init_;
  return it->freq * 100;
}

Well, of course none of the above is more accurate than your system clock, which is unlikely to be accurate to better than a few hundred milliseconds. The purpose of a precise clock (as opposed to an accurate one), as implemented above, is to provide a single measure that can be used for both:

  1. cheap and very accurate measurement of time intervals (not wall time),
  2. a much less accurate, but monotonic measure of wall time, consistent with the above.

I think it does this pretty well. An example use is logging, where one can use timestamps not only to find the time of events, but also to reason about internal program timings, latency (in microseconds), etc.

I leave the plumbing (call to initial calibration, scheduling adjustment) as an exercise for gentle readers.

You can use the Boost Date Time library.

You can use boost::posix_time::hours, boost::posix_time::minutes, boost::posix_time::seconds, boost::posix_time::millisec, boost::posix_time::nanosec

http://www.boost.org/doc/libs/1_39_0/doc/html/date_time.html

One popular way is using the QueryPerformanceCounter() call. This is useful if you need high-precision timing, such as for measuring durations that only take on the order of microseconds. I believe this is implemented using the RDTSC machine instruction.

There might be issues, though, such as the counter frequency varying with power saving, and synchronization between multiple cores. See the Wikipedia article on the Time Stamp Counter for details on these issues.

Take a look at the Windows APIs GetSystemTime() / GetLocalTime() or GetSystemTimeAsFileTime().

GetSystemTimeAsFileTime() expresses time in 100-nanosecond intervals, that is, 1/10 of a microsecond. All of these functions provide the current time with millisecond accuracy at best.

EDIT:

Keep in mind that on most Windows systems the system time is only updated about every millisecond. So representing the time with microsecond resolution is only useful if you can also acquire it with that precision.

Maybe this can help:

NTSTATUS WINAPI NtQuerySystemTime(__out  PLARGE_INTEGER SystemTime);

SystemTime [out] - a pointer to a LARGE_INTEGER structure that receives the system time. This is a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC).
