D. Replacement Cost
The replacement cost refers to the time spent replacing and updating data in the cache. Although many algorithms improve on LRU and achieve higher hit rates, LRU's implementation is simpler and its replacement cost is lower, which is why it is still widely used. As Fig. 7 shows, LRU has the lowest replacement cost, and the replacement cost of DLK is only slightly higher. Considering the clear improvement DLK achieves in hit rate, this small loss in replacement cost is acceptable. LRU-K must maintain historical access queues, and the time needed to count the occurrences of a record in a historical access queue grows with the queue's length; the longer the historical access queue, the longer the search takes, so the replacement cost of LRU-K is relatively high.
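This cost gap can be illustrated with a minimal sketch, assuming a Java setting (HBase's implementation language); the class and method names below are hypothetical and not taken from the paper. An LRU cache built on LinkedHashMap evicts its oldest entry in constant time, while an LRU-K-style history has to scan its history queue to count a record's occurrences, so the work grows with the queue length.

import java.util.*;

// Minimal sketch (hypothetical names, not the paper's code): contrasts the
// replacement cost of plain LRU with an LRU-K-style history-queue scan.
public class ReplacementCostSketch {

    // LRU: a LinkedHashMap in access order evicts its eldest entry in O(1).
    static <K, V> Map<K, V> lruCache(int capacity) {
        return new LinkedHashMap<K, V>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    // LRU-K style: counting a record's occurrences in the history queue is a
    // linear scan, so the cost grows with the length of the queue.
    static int countInHistory(Deque<String> historyQueue, String record) {
        int occurrences = 0;
        for (String r : historyQueue) {          // O(queue length)
            if (r.equals(record)) occurrences++;
        }
        return occurrences;
    }

    public static void main(String[] args) {
        Map<String, String> lru = lruCache(2);
        lru.put("a", "1");
        lru.put("b", "2");
        lru.get("a");                            // touch "a"; "b" becomes the victim
        lru.put("c", "3");                       // evicts "b" in constant time
        System.out.println(lru.keySet());        // [a, c]

        Deque<String> history = new ArrayDeque<>(List.of("a", "b", "a", "c"));
        System.out.println(countInHistory(history, "a"));   // 2
    }
}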
Fig. 7 Replacement Cost.
V. CONCLUSION
A double frequency queue K cache replacement algorithm for records, DLK, is proposed. It compensates for LRU's lack of attention to data access frequency by using separate queues to cache data with different access frequencies. When DLK evicts data that resides in the second-level list, the data is not evicted directly; instead it is downgraded to low-frequency data and inserted into the appropriate position of the first-level list according to its access time. We divide memory into a data area and a list area to improve query performance, and we split the retrieval operation from the replacement operation so that the cache is updated asynchronously. Our comprehensive experimental evaluation shows that this method significantly improves the hit rate and reduces the query time.
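The demotion rule can be sketched as follows, assuming a Java setting; the queue names, capacities, and the plain last-access-time ordering below are illustrative assumptions rather than the authors' implementation. A record evicted from the second-level (high-frequency) list is demoted into the first-level (low-frequency) list at a position given by its access time, and only first-level victims actually leave the cache.

import java.util.*;

// Simplified sketch of the demotion idea described above (not the authors'
// code): queue names, capacities, and the ordering rule are assumptions.
class DlkDemotionSketch {
    static class Entry {
        final String key;
        final long lastAccess;   // used to place demoted entries in the first-level list
        Entry(String key, long lastAccess) { this.key = key; this.lastAccess = lastAccess; }
    }

    final Deque<Entry> firstLevel = new ArrayDeque<>();   // low-frequency records
    final Deque<Entry> secondLevel = new ArrayDeque<>();  // high-frequency records
    final int firstCap, secondCap;

    DlkDemotionSketch(int firstCap, int secondCap) {
        this.firstCap = firstCap;
        this.secondCap = secondCap;
    }

    // When the second-level list overflows, its victim is not evicted outright:
    // it is demoted and inserted into the first-level list by last access time.
    void demoteOverflow() {
        while (secondLevel.size() > secondCap) {
            Entry victim = secondLevel.pollLast();          // victim at the tail of the second-level list
            insertByAccessTime(victim);
            while (firstLevel.size() > firstCap) {
                firstLevel.pollLast();                      // only first-level victims leave the cache
            }
        }
    }

    private void insertByAccessTime(Entry e) {
        // Keep the first-level list ordered by last access time, newest first.
        List<Entry> merged = new ArrayList<>(firstLevel);
        merged.add(e);
        merged.sort(Comparator.comparingLong((Entry x) -> x.lastAccess).reversed());
        firstLevel.clear();
        firstLevel.addAll(merged);
    }
}

In the full design, these list entries would live in the list area while the record values stay in the data area, and the replacement step would run asynchronously from the retrieval step, as summarized above.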
As future work, we will continue to improve this work in two directions: (1) reducing the replacement cost of DLK, and (2) adding prediction of cache data so that data can be pre-stored, further improving the hit rate.

ACKNOWLEDGMENT

Zhaohui Zhang is the corresponding author. This work was supported by the National Natural Science Foundation of China (No. 61472004, 61602109), the Shanghai Science and Technology Innovation Action Plan Project (No. 16511100903), and the Key Laboratory of Embedded System and Service Computing (Tongji University), Ministry of Education (2015).