Memory latency
- Today, chip-level multiprocessing provides more CPUs on a single chip, permitting even greater performance due to reduced memory latency.
- Given these trends, memory latency was expected to become an overwhelming bottleneck in computer performance.
- This increases the task's memory access latency until its data has been pulled into the cache of the new CPU (a pinning sketch that avoids this migration cost appears after this list).
- For the GMS model, the thesis identifies and analyzes the key technical problems to be solved, including the access interface and programming model, a scalable architecture and interconnect protocol, intelligent memory, and latency hiding.
- These products, like any other product written in Java, enjoy tuning-free deployment and the ability to cache entire databases in memory at microsecond latency.
- Part one covers paging, memory, and I/O delays; part two focuses on disk I/O and the network.
- However, modelers with powerful machines might prefer to open all fragments immediately, eliminating the delay of a fragment being loaded into memory later on.
- To optimize DOM parsing, a deferred-expansion method that uses less memory than conventional DOM parsing is proposed and implemented (the general idea is sketched after this list).
- Each processor has equal access to the shared memory, that is, the same access latency to the whole memory space (a way to measure this latency is sketched after this list).
- The tightly coupled nature of the CMP allows very short physical distances between processors and memory and, therefore, minimal memory access latency and higher performance.
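
For the task-migration entry above, the sketch below shows one common way to avoid paying that cost: pinning the process to a single CPU so its working set stays warm in that CPU's cache. This is only a sketch, assuming Linux with glibc; the CPU index 2 is purely illustrative and error handling is minimal.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* Restrict the calling process to CPU 2 so the scheduler never
     * migrates it; its data stays in that CPU's cache, avoiding the
     * extra memory latency of refilling a cold cache elsewhere. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);                       /* illustrative CPU index */

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return EXIT_FAILURE;
    }

    /* ... run the latency-sensitive work here ... */
    return EXIT_SUCCESS;
}
```

Pinning trades the scheduler's load-balancing freedom for cache warmth, which is exactly the latency effect that entry describes.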
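
The deferred-expansion entry refers to a specific DOM optimization; the C sketch below only illustrates the general idea, under the assumption that a record's fields are not split out of the raw text until they are first requested. The row format and the data are invented for the example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A row keeps only a pointer to its raw "a,b,c" text until someone
 * asks for its fields; the per-field copies are built on first use. */
typedef struct {
    const char *raw;
    char      **fields;   /* NULL until expanded */
    int         nfields;
} row_t;

static void expand_row(row_t *r) {
    if (r->fields) return;                    /* already expanded */
    char *copy = strdup(r->raw);              /* strtok modifies its input */
    int cap = 4, n = 0;
    char **f = malloc(cap * sizeof *f);
    for (char *tok = strtok(copy, ","); tok; tok = strtok(NULL, ",")) {
        if (n == cap) f = realloc(f, (cap *= 2) * sizeof *f);
        f[n++] = strdup(tok);
    }
    free(copy);
    r->fields = f;
    r->nfields = n;
}

int main(void) {
    row_t rows[3] = {
        { "alice,30,engineer", NULL, 0 },
        { "bob,41,analyst",    NULL, 0 },
        { "carol,29,designer", NULL, 0 },
    };

    expand_row(&rows[1]);                     /* only this row is visited */
    for (int i = 0; i < rows[1].nfields; i++)
        printf("field %d: %s\n", i, rows[1].fields[i]);
    return 0;
}
```

Only the rows that are actually visited pay the parsing cost and the per-field allocations, which is where a deferred scheme saves memory over eager expansion.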
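
For the shared-memory entry, one rough way to observe per-CPU access latency is a pointer-chasing loop over a buffer much larger than the caches: each load depends on the previous one, so the average time per step approximates memory latency. The buffer size and step count below are arbitrary choices, and on a real SMP you would run the loop pinned to each CPU in turn (as in the first sketch) and compare the averages.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (1u << 24)    /* 16M entries of size_t, about 128 MB on 64-bit */
#define STEPS (1u << 24)

int main(void) {
    size_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return EXIT_FAILURE;

    /* Sattolo's algorithm: shuffling the identity this way yields a
     * single-cycle permutation, so the chase visits every slot and
     * cannot fall into a short loop that fits in cache. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t s = 0; s < STEPS; s++) p = next[p];   /* dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg ~%.1f ns per access (sink=%zu)\n", ns / STEPS, p);
    free(next);
    return 0;
}
```

If every CPU reports roughly the same average, that is the "same access latency to the memory space" the entry describes; on a NUMA machine the numbers diverge depending on which node holds the buffer.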