


Detailed graphic explanation of the memory and GC of the Node V8 engine
Mar 29, 2023

This article will give you an in-depth look at the memory layout and garbage collector (GC) of the V8 engine used by Node.js. I hope you find it helpful!
1. Why GC is needed
Every program needs memory, and that memory is usually discussed in terms of two regions: the stack and the heap.
The stack is a linear, last-in-first-out structure that is released automatically as functions return. The heap is a region of dynamically allocated memory that is either allocated and freed manually or managed automatically by a garbage collector (hereafter GC).
In the early days of software development, and still in languages such as C and C++, heap memory was allocated and freed manually. That gives precise control over memory and can achieve the best possible usage, but development efficiency is low and mistakes in memory handling are easy to make.
As technology evolved, high-level languages (such as Java and Node.js) no longer require developers to manage memory by hand: the runtime allocates and releases space automatically, and the garbage collector (GC) was born to reclaim and organize memory. In most cases developers do not need to think about memory at all and can focus on business logic. The rest of this article focuses on heap memory and the GC.
2. GC Development
Running the GC consumes CPU, and parts of the GC process trigger STW (stop-the-world) pauses that suspend the application threads. Why STW? It guarantees that objects created while the GC is working cannot conflict with the collection in progress.
GC has evolved mainly in response to growing memory sizes. Its development can be roughly split into three representative stages:
- Stage 1: single-threaded GC (representative: Serial)
While this collector runs, it must completely pause all other worker threads. It is the earliest form of GC and has the worst performance.
- Stage 2: parallel multi-threaded GC (representatives: Parallel Scavenge, ParNew)
Multiple GC threads run in parallel on a multi-CPU machine, which shortens both the collection time and the pause time of user threads. This kind of algorithm still triggers STW and completely suspends all other worker threads.
- Stage 3: concurrent GC (representatives: CMS (Concurrent Mark Sweep), G1)
"Concurrent" here means that GC threads can run at the same time as the business code.
The algorithms of the first two stages are fully STW, whereas in concurrent GC some phases of the collector run alongside the business code, keeping STW pauses short. Concurrency can introduce marking errors, because new objects may appear while the GC is running, but the algorithms are designed to detect and correct this.
These three stages do not mean that every GC must be exactly one of the three types; GCs in different programming languages combine a variety of algorithms according to their needs.
3. V8 memory partitions and GC
Heap memory design and GC design are closely related. V8 divides the heap memory into several major areas and adopts a generational strategy.
- New space (new-space or young-generation): a small space split into two half-spaces (semi-spaces); the data stored here has a short lifetime.
- Old space (old-space or old-generation): a large space that can grow; data stored here lives for a long time.
- Large object space (large-object-space): objects larger than 256 KB land here by default (explained below).
- Code space (code-space): code compiled by the just-in-time (JIT) compiler is stored here.
- Cell space (cell-space): used to store small, fixed-size values such as numbers and booleans.
- Property cell space (property-cell-space): used to store special objects such as accessor properties and certain internal objects.
- Map space (map-space): used to store the hidden classes (Maps) that describe object shapes, along with other internal metadata.
3.1 Generational strategy: new generation and old generation
In Node.js, the GC uses a generational strategy: the heap is split into a new generation and an old generation, and most memory lives in these two areas.
3.1.1 The new generation
The new generation is a small, fast memory pool that stores young objects. It is split into two half-spaces (semi-spaces): one half is free (called to-space) and the other half holds data (called from-space).
When an object is first created it is allocated in the new generation's from-space with an age of 1. When from-space runs low or exceeds a certain size, a Minor GC is triggered, using the copying algorithm Scavenge. The GC pauses the application (STW, stop-the-world), marks all live objects in from-space, then copies them compactly into the other, free half (to-space). Finally the whole of the original from-space is released and the two halves swap roles: from becomes to and vice versa. The copying algorithm trades space for time.
Because the new generation is small, it triggers GC more frequently; but the space to scan is also small, so each GC is cheap and finishes quickly.
Each time an object survives a Minor GC, its age increases by 1. Objects that survive multiple Minor GCs (age greater than N) are promoted to the old generation.
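To make the flow concrete, here is a toy sketch of the Scavenge idea in JavaScript. The object shape, the live set and the promotion threshold are made up for illustration; V8's real implementation works on raw memory pages, not arrays.

// Toy model of Scavenge: copy live objects from from-space to to-space,
// promote objects that have survived too many collections.
function scavenge(fromSpace, liveSet, promoteToOld, AGE_LIMIT = 3) {
  const toSpace = [];
  for (const obj of fromSpace) {
    if (!liveSet.has(obj)) continue;   // dead objects are simply not copied
    obj.age += 1;
    if (obj.age > AGE_LIMIT) {
      promoteToOld(obj);               // long-lived: moved to the old generation
    } else {
      toSpace.push(obj);               // survivor: copied into the free half-space
    }
  }
  return toSpace;                      // caller swaps the roles of from/to space
}

// Example usage with hypothetical objects:
const objA = { age: 1 }, objB = { age: 1 };
const oldGen = [];
const newFrom = scavenge([objA, objB], new Set([objA]), o => oldGen.push(o));
console.log(newFrom.length, oldGen.length); // 1 survivor copied, 0 promoted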
3.1.2 Old generation
The old generation is a large memory pool that stores long-lived objects. It is collected with the Mark-Sweep and Mark-Compact algorithms; a single run is called a Major GC. When objects fill the old generation beyond a certain proportion, that is, when the ratio of live objects to total space passes a threshold, a mark-sweep or mark-compact pass is triggered.
Because the old generation is large, its GC takes longer to run but happens less often than in the new generation. If space is still insufficient after an old-generation GC, V8 asks the system for more memory.
You can trigger GC manually by calling global.gc() (with optional parameters). Note that this method is disabled by default in Node.js; to enable it, start the application with the --expose-gc flag, for example:
node --expose-gc app.js
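As a quick illustration (a minimal sketch; the array size and log labels are arbitrary), you can watch the effect of a manual GC on heapUsed:

// Run with: node --expose-gc gc-demo.js
const mb = n => `${(n / 1048576).toFixed(1)} MB`;

const before = process.memoryUsage().heapUsed;
let data = new Array(1e6).fill('some payload');   // allocate a chunk of heap
console.log('after allocation:', mb(process.memoryUsage().heapUsed - before));

data = null;       // drop the only reference so the array becomes garbage
global.gc();       // only defined when Node was started with --expose-gc
console.log('after global.gc():', mb(process.memoryUsage().heapUsed - before));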
In the old generation, V8 performs garbage collection by combining Mark-Sweep and Mark-Compact.
Mark-Sweep has two phases, mark and sweep. In the mark phase, all objects in the heap are traversed and the live ones are marked; in the sweep phase, only the unmarked objects are cleared.
The biggest problem with Mark-Sweep is that after a pass the memory space becomes non-contiguous. This fragmentation causes trouble for later allocations: when a large object needs space, none of the fragments may be big enough, which forces another, otherwise unnecessary, garbage collection.
Mark-Compact was introduced to solve Mark-Sweep's fragmentation problem. Mark-Compact means mark-compaction and builds on Mark-Sweep: after dead objects are identified, the live objects are moved towards one end of the space during cleanup, and once the move is complete everything beyond the boundary is freed in one go. V8 also returns a certain amount of freed memory to the system based on its own heuristics.
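As a purely conceptual illustration of the difference between the two passes (this has nothing to do with V8's real data structures; the heap is modeled as a plain array):

// Mark: walk from the roots and record everything reachable.
function mark(roots) {
  const live = new Set();
  const stack = [...roots];
  while (stack.length) {
    const obj = stack.pop();
    if (live.has(obj)) continue;
    live.add(obj);
    for (const ref of obj.refs || []) stack.push(ref);
  }
  return live;
}

// Sweep: clear dead slots in place, which leaves holes (fragmentation).
const sweep = (heap, live) => heap.map(obj => (live.has(obj) ? obj : null));

// Compact: slide the survivors to one end so the free space is contiguous.
const compact = (heap, live) => heap.filter(obj => live.has(obj));

// Example: c is unreachable from the root b.
const a = { refs: [] }, b = { refs: [a] }, c = { refs: [] };
const live = mark([b]);                 // live = { a, b }
console.log(sweep([a, c, b], live));    // c's slot becomes null: a hole is left behind
console.log(compact([a, c, b], live));  // survivors packed together, no holes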
3.2 Large object space (large-object-space)
Large objects are created directly in the large object space and are never moved to other spaces. So how large does an object have to be to skip the new generation's from-space? After digging through documentation and the V8 source, the answer is 256 KB by default. V8 does not seem to expose a flag to change this; the v8_enable_hugepage option in the source is a build-time setting.
// There is a separate large object space for objects larger than
// Page::kMaxRegularHeapObjectSize, so that they do not have to move during
// collection. The large object space is paged. Pages in large object space
// may be larger than the page size.
(1 << (18 - 1)) gives 128 KB; (1 << (19 - 1)) gives 256 KB; (1 << (21 - 1)) gives 1 MB (if hugepage is enabled).
4. Sizes of V8's new and old generations
4.1 Old generation size
Before v12.x:
To keep GC pause times within a reasonable range, V8 capped the maximum memory: the default old-generation limit was about 1.4 GB on 64-bit systems and about 700 MB on 32-bit systems; exceeding it crashed the application.
If you need more memory, use --max-old-space-size to raise the limit (unit: MB):
node --max-old-space-size=<size in MB> app.js
From v12 onward:
V8 now sizes the old generation (effectively the heap) based on the available memory, so the heap size is no longer hard-limited. The old cap was arguably unreasonable and constrained what V8 could do; a program should not be stopped just because GC takes longer. Later versions also optimized the GC further, and ever larger heaps are simply where things are heading.
If you still want a limit, --max-old-space-size remains available; since v12 its default value is 0, which means unlimited.
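To see what limit is actually in effect, you can read heap_size_limit from v8.getHeapStatistics() (covered in 5.1). A quick sketch, where the file name and the value 512 are just example choices:

// check-limit.js, run with: node --max-old-space-size=512 check-limit.js
const v8 = require('node:v8');
const limitMB = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log(`heap_size_limit: ${limitMB.toFixed(0)} MB`);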
Reference: nodejs.medium.com/introducing…
4.2 New generation size
A single semi-space in the new generation defaults to 16 MB on 64-bit systems and 8 MB on 32-bit systems; since there are two semi-spaces, the totals are 32 MB and 16 MB respectively.
--max-semi-space-size
--max-semi-space-size sets the maximum size of a new-generation semi-space, in MB.
Bigger is not better here: the larger the space, the longer each scan takes. In most cases this setting should be left alone; change it only to optimize for a specific workload, and do so carefully.
--max-new-space-size
--max-new-space-size was supposed to set the maximum size of the whole new generation, in KB (it does not exist).
Many articles mention this flag, but I could not find it in the nodejs.org documentation for v4, v6, v7, v8 or v10, nor via node --v8-options. Perhaps some very old versions had it; today you should use --max-semi-space-size instead.
5. Memory analysis APIs
5.1 v8.getHeapStatistics()
Call v8.getHeapStatistics() to inspect V8's heap statistics, including the maximum heap size heap_size_limit, which covers the new generation, old generation, large object space and so on. On my machine with 8 GB of RAM and Node 16.x, heap_size_limit is 4 GB.
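For example, a minimal call (the output below is what I got on that machine):

const v8 = require('node:v8');
console.log(v8.getHeapStatistics());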
{
  total_heap_size: 6799360,
  total_heap_size_executable: 524288,
  total_physical_size: 5523584,
  total_available_size: 4340165392,
  used_heap_size: 4877928,
  heap_size_limit: 4345298944,
  malloced_memory: 254120,
  peak_malloced_memory: 585824,
  does_zap_garbage: 0,
  number_of_native_contexts: 2,
  number_of_detached_contexts: 0
}
I then checked Node.js applications running in k8s containers, on v12, v14 and v16; the results are in the table below. heap_size_limit appears to be half of the container's current maximum memory. Why is it 256 MB when the container limit is 128 MB? Because the container also has swap: its effective maximum memory is the memory limit x 2, with an equal amount of swap.
So in most cases the default heap_size_limit is half of the system memory. If usage exceeds that value and the system still has room, V8 will request more space; and as memory usage grows, this ceiling can grow too, provided the system has enough memory. It is not a hard, exact rule.
Container memory limit | heap_size_limit
---|---
4G | 2G
2G | 1G
1G | 0.5G
1.5G | 0.7G
256M | 256M
128M | 256M
5.2 process.memoryUsage()
process.memoryUsage()
{
  rss: 35438592,
  heapTotal: 6799360,
  heapUsed: 4892976,
  external: 939130,
  arrayBuffers: 11170
}
This returns the current process's memory usage, including heapTotal and heapUsed. You can poll it on a timer and plot the values as a line chart to help analyze memory behaviour; Easy-Monitor, for example, provides exactly this kind of chart.
This is best used in a local development environment: turn it on, fire a large number of requests and watch the memory curve climb; after the requests finish and GC runs, the curve should drop. Repeat the cycle a few times; if memory keeps growing and the troughs get higher and higher, a memory leak has probably occurred.
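If you are not using a tool like Easy-Monitor, a minimal sketch of such periodic sampling might look like this (the 5-second interval and log format are arbitrary choices):

// Log heap usage every 5 seconds; pipe the output into any charting tool.
const MB = 1024 * 1024;
setInterval(() => {
  const { rss, heapTotal, heapUsed } = process.memoryUsage();
  console.log(
    new Date().toISOString(),
    `rss=${(rss / MB).toFixed(1)}MB`,
    `heapTotal=${(heapTotal / MB).toFixed(1)}MB`,
    `heapUsed=${(heapUsed / MB).toFixed(1)}MB`
  );
}, 5000);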
5.3 Printing GC events
Usage:
node --trace_gc app.js
// or
v8.setFlagsFromString('--trace_gc');
- --trace_gc
[40807:0x148008000] 235490 ms: Scavenge 247.5 (259.5) -> 244.7 (260.0) MB, 0.8 / 0.0 ms (average mu = 0.971, current mu = 0.908) task
[40807:0x148008000] 235521 ms: Scavenge 248.2 (260.0) -> 245.2 (268.0) MB, 1.2 / 0.0 ms (average mu = 0.971, current mu = 0.908) allocation failure
[40807:0x148008000] 235616 ms: Scavenge 251.5 (268.0) -> 245.9 (268.8) MB, 1.9 / 0.0 ms (average mu = 0.971, current mu = 0.908) task
[40807:0x148008000] 235681 ms: Mark-sweep 249.7 (268.8) -> 232.4 (268.0) MB, 7.1 / 0.0 ms (+ 46.7 ms in 170 steps since start of marking, biggest step 4.2 ms, walltime since start of marking 159 ms) (average mu = 1.000, current mu = 1.000) finalize incremental marking via task GC in old space requested
GCType <heapUsed before> (<heapTotal before>) -> <heapUsed after> (<heapTotal after>) MB
Scavenge and Mark-sweep above are the GC types: Scavenge is a new-generation collection event, Mark-sweep is an old-generation mark-sweep event. The number before the arrow is the heap actually in use before the event, the number after the arrow is the heap in use after the event, and the value in parentheses is the total heap size. You can see that new-generation events happen very frequently, while the old-generation event that follows releases total heap space.
- --trace_gc_verbose
Shows the detailed state of each heap space:
v8.setFlagsFromString('--trace_gc_verbose');

[44729:0x130008000] Fast promotion mode: false survival rate: 19%
[44729:0x130008000] 97120 ms: [HeapController] factor 1.1 based on mu=0.970, speed_ratio=1000 (gc=433889, mutator=434)
[44729:0x130008000] 97120 ms: [HeapController] Limit: old size: 296701 KB, new limit: 342482 KB (1.1)
[44729:0x130008000] 97120 ms: [GlobalMemoryController] Limit: old size: 296701 KB, new limit: 342482 KB (1.1)
[44729:0x130008000] 97120 ms: Scavenge 302.3 (329.9) -> 290.2 (330.4) MB, 8.4 / 0.0 ms (average mu = 0.998, current mu = 0.999) task
[44729:0x130008000] Memory allocator, used: 338288 KB, available: 3905168 KB
[44729:0x130008000] Read-only space, used: 166 KB, available: 0 KB, committed: 176 KB
[44729:0x130008000] New space, used: 444 KB, available: 15666 KB, committed: 32768 KB
[44729:0x130008000] New large object space, used: 0 KB, available: 16110 KB, committed: 0 KB
[44729:0x130008000] Old space, used: 253556 KB, available: 1129 KB, committed: 259232 KB
[44729:0x130008000] Code space, used: 10376 KB, available: 119 KB, committed: 12944 KB
[44729:0x130008000] Map space, used: 2780 KB, available: 0 KB, committed: 2832 KB
[44729:0x130008000] Large object space, used: 29987 KB, available: 0 KB, committed: 30336 KB
[44729:0x130008000] Code large object space, used: 0 KB, available: 0 KB, committed: 0 KB
[44729:0x130008000] All spaces, used: 297312 KB, available: 3938193 KB, committed: 338288 KB
[44729:0x130008000] Unmapper buffering 0 chunks of committed: 0 KB
[44729:0x130008000] External memory reported: 20440 KB
[44729:0x130008000] Backing store memory: 22084 KB
[44729:0x130008000] External memory global 0 KB
[44729:0x130008000] Total time spent in GC : 199.1 ms
- --trace_gc_nvp
Shows detailed information for every GC event: the GC type, the time spent in each phase, memory changes, and so on:
v8.setFlagsFromString('--trace_gc_nvp');

[45469:0x150008000] 8918123 ms: pause=0.4 mutator=83.3 gc=s reduce_memory=0 time_to_safepoint=0.00 heap.prologue=0.00 heap.epilogue=0.00 heap.epilogue.reduce_new_space=0.00 heap.external.prologue=0.00 heap.external.epilogue=0.00 heap.external_weak_global_handles=0.00 fast_promote=0.00 complete.sweep_array_buffers=0.00 scavenge=0.38 scavenge.free_remembered_set=0.00 scavenge.roots=0.00 scavenge.weak=0.00 scavenge.weak_global_handles.identify=0.00 scavenge.weak_global_handles.process=0.00 scavenge.parallel=0.08 scavenge.update_refs=0.00 scavenge.sweep_array_buffers=0.00 background.scavenge.parallel=0.00 background.unmapper=0.04 unmapper=0.00 incremental.steps_count=0 incremental.steps_took=0.0 scavenge_throughput=1752382 total_size_before=261011920 total_size_after=260180920 holes_size_before=838480 holes_size_after=838480 allocated=831000 promoted=0 semi_space_copied=4136 nodes_died_in_new=0 nodes_copied_in_new=0 nodes_promoted=0 promotion_ratio=0.0% average_survival_ratio=0.5% promotion_rate=0.0% semi_space_copy_rate=0.5% new_space_allocation_throughput=887.4 unmapper_chunks=124
[45469:0x150008000] 8918234 ms: pause=0.6 mutator=110.9 gc=s reduce_memory=0 time_to_safepoint=0.00 heap.prologue=0.00 heap.epilogue=0.00 heap.epilogue.reduce_new_space=0.04 heap.external.prologue=0.00 heap.external.epilogue=0.00 heap.external_weak_global_handles=0.00 fast_promote=0.00 complete.sweep_array_buffers=0.00 scavenge=0.50 scavenge.free_remembered_set=0.00 scavenge.roots=0.08 scavenge.weak=0.00 scavenge.weak_global_handles.identify=0.00 scavenge.weak_global_handles.process=0.00 scavenge.parallel=0.08 scavenge.update_refs=0.00 scavenge.sweep_array_buffers=0.00 background.scavenge.parallel=0.00 background.unmapper=0.04 unmapper=0.00 incremental.steps_count=0 incremental.steps_took=0.0 scavenge_throughput=1766409 total_size_before=261207856 total_size_after=260209776 holes_size_before=838480 holes_size_after=838480 allocated=1026936 promoted=0 semi_space_copied=3008 nodes_died_in_new=0 nodes_copied_in_new=0 nodes_promoted=0 promotion_ratio=0.0% average_survival_ratio=0.5% promotion_rate=0.0% semi_space_copy_rate=0.3% new_space_allocation_throughput=888.1 unmapper_chunks=124
5.4 Memory snapshots
const { writeHeapSnapshot } = require('node:v8');
writeHeapSnapshot();
Taking a snapshot triggers STW: the service stops responding, and the more memory is in use, the longer it takes. The call itself is expensive, so do not expect the file to be generated quickly; be patient.
Note: while the snapshot is being generated the program pauses (STW) and responds to almost nothing. If the container uses a health check that cannot be answered during this time, the container may be restarted and the snapshot lost; if you need a snapshot, consider disabling the health check first.
Compatibility: this API does not work on the arm64 architecture; calling it hangs the process after producing an empty snapshot file. Using the heapdump library instead fails outright with:
(mach-o file, but is an incompatible architecture (have (arm64), need (x86_64))
The API produces a snapshot file with the .heapsnapshot extension. You can load it into the Memory tab of the Chrome DevTools to inspect the number and size of objects on the heap, their distance from the GC root, and so on. You can also diff two snapshots taken at different times to see how the amount of data changed between them.
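In practice you usually want to trigger a snapshot on demand from a running process rather than at startup. One common pattern is to write it when a signal arrives; a sketch (the signal choice and file name are arbitrary, POSIX only, and the STW caveat above still applies):

// Send the process SIGUSR2 (e.g. `kill -USR2 <pid>`) to dump a snapshot.
const { writeHeapSnapshot } = require('node:v8');

process.on('SIGUSR2', () => {
  // Blocks the event loop while writing; the bigger the heap, the longer it takes.
  const file = writeHeapSnapshot(`heap-${Date.now()}.heapsnapshot`);
  console.log(`heap snapshot written to ${file}`);
});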
6. Using memory snapshots to analyze a memory leak
A Node application kept restarting because its memory exceeded the container limit, and the container monitoring dashboard showed a memory curve that only went up, so a memory leak was the likely cause.
Using the Chrome DevTools to compare snapshots taken at different times, the fastest-growing objects turned out to be closures. Expanding the list showed that most of the data were mongo document objects; in other words, data captured inside closures was never released. The Object list showed a similar pattern, and the outermost entries were Mongoose Connection objects.
At this point the leak was roughly located: somewhere around the mongo data-persistence logic of one class.
There were also many Timeout objects, and their distance from the GC root was very deep. Expanding the details and following the nesting layer by layer pointed to the exact spot in the code: that class ran a scheduled job with setInterval to process some non-urgent tasks in batches, and each setInterval was supposed to be cleared with clearInterval once its work was done.
Leak resolution and optimization
Code analysis finally revealed the problem: the clearInterval trigger condition was never met, so the timer was never cleared and kept firing. The code and the data captured in its closure stayed alive and could not be collected by the GC, so memory grew until it hit the limit and the process crashed.
Using setInterval here was unreasonable in the first place. While fixing the bug I switched to a for await queue that processes items sequentially, which avoids a burst of concurrent work and makes the code much clearer. The code is quite old, so I won't dwell on why setInterval was chosen originally.
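A simplified sketch of the shape of the problem and the fix (the function names, batch source and handler are illustrative, not the project's actual code):

// Leak-prone shape: if the exit condition is never hit, the interval and
// everything captured in its closure stay alive forever.
function processWithTimer(batches, handleBatch) {
  const timer = setInterval(async () => {
    const batch = batches.shift();
    if (batch === undefined) return clearInterval(timer); // never reached => leak
    await handleBatch(batch);
  }, 1000);
}

// Sequential replacement: each batch finishes before the next one starts,
// and nothing outlives the function call.
async function processSequentially(batches, handleBatch) {
  for await (const batch of batches) {
    await handleBatch(batch);
  }
}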
After more than ten days of observation following the new release, average memory stayed at a little over 100 MB; GC reclaimed the temporary spikes normally, the curve looked like a gentle wave, and no further leaks appeared.
That is how the leak was found and fixed with memory snapshots. The real analysis had its twists: snapshot contents are not easy to read and not at all obvious. The data is aggregated by type, so you have to look at different constructors and their internal details and combine that with knowledge of your own code to find clues. In my snapshot, for example, the growing data included closures, strings, mongo model classes, Timeout and Object entries, and all of that growth came from the one problematic piece of code that the GC could not reclaim.
7. Finally
Different languages implement GC differently; take Java and Go for example:
Java: look at the JVM (the counterpart of Node's V8). Java also uses a generational strategy, and its young generation additionally has an Eden area where new objects are created; V8's new generation has no Eden area.
Go: uses mark-sweep with a tri-color marking algorithm.
Different languages implement GC differently, but fundamentally they all combine various algorithms. Each combination has different performance characteristics; they are all trade-offs, each biased towards different application scenarios.