Samsung specifies 7000 MB/sec for this drive (and forum posts report similar figures), but my Samsung 990 Pro 1TB SSD (M.2, NVMe, PCIe Gen4, 4 lanes) only varies between 3xxx and 4xxx MB/sec.
It is in a new ASUS motherboard with the B760 chipset, which is capable of PCIe Gen5, but the SSD is Gen4, 4 lanes.
top shows very little CPU and memory usage. hdparm gives a different speed on each run, which is strange, since the RPi 5 gives a quite constant speed. The CPU is an Intel i5-14500 with integrated graphics.
sudo lspci -vvv shows the correct 16 GT/s speed and 4 lanes. On a separate RPi 5 with a Samsung 980 Pro, the speed is correct (and marked downgraded) at 5 GT/s and 1 lane.
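Given the run-to-run variation, one thing worth checking is whether the drive heats up and throttles between runs. A quick check with nvme-cli (assuming the controller shows up as /dev/nvme0; the grep pattern is just my guess at the relevant lines):
Code:
# Look at drive temperature and throttling counters (needs the nvme-cli package)
sudo nvme smart-log /dev/nvme0 | grep -iE 'temperature|warning'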
Code:
$ sudo hdparm -t --direct /dev/nvme0n1

/dev/nvme0n1:
 Timing O_DIRECT disk reads: 13180 MB in 3.00 seconds = 4393.20 MB/sec
 Timing O_DIRECT disk reads: 9488 MB in 3.00 seconds = 3162.25 MB/sec
----
$ lsb_release -a
Description:    Debian GNU/Linux 12 (bookworm)
----
$ uname -r
6.1.0-30-amd64
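hdparm only issues one read at a time, so it may simply not queue enough I/O to reach the rated 7000 MB/sec. For comparison, a deeper-queue sequential read with fio (the block size and queue depth below are my guesses at sensible values, not Samsung's test parameters):
Code:
# Read-only sequential test with 32 outstanding 1 MiB requests (needs the fio package)
sudo fio --name=seqread --filename=/dev/nvme0n1 --readonly --direct=1 \
    --ioengine=libaio --rw=read --bs=1M --iodepth=32 --runtime=30 --time_based
The full lspci -vvv output from the desktop is below: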
Code:
sudo lspci -vvv
01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal] (prog-if 02 [NVM Express])
    Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller S4LV008[Pascal]
    Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0, Cache Line Size: 64 bytes
    Interrupt: pin A routed to IRQ 16
    IOMMU group: 15
    Region 0: Memory at 86000000 (64-bit, non-prefetchable) [size=16K]
    Capabilities: [40] Power Management version 3
        Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
        Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
    Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
        Address: 0000000000000000  Data: 0000
    Capabilities: [70] Express (v2) Endpoint, MSI 00
        DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
            ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 75W
        DevCtl: CorrErr+ NonFatalErr+ FatalErr+ UnsupReq+
            RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
            MaxPayload 256 bytes, MaxReadReq 512 bytes
        DevSta: CorrErr- NonFatalErr- FatalErr- UnsupReq- AuxPwr- TransPend-
        LnkCap: Port #0, Speed 16GT/s, Width x4, ASPM L1, Exit Latency L1 <64us
            ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
        LnkCtl: ASPM Disabled; RCB 64 bytes, Disabled- CommClk+
            ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
        LnkSta: Speed 16GT/s, Width x4
            [[[on the RPi 5 (PCIe Gen2 at 400MB/s, 1 lane) this reads: LnkSta: Speed 5GT/s (downgraded), Width x1 (downgraded)]]]
            TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
        DevCap2: Completion Timeout: Range ABCD, TimeoutDis+ NROPrPrP- LTR+
             10BitTagComp+ 10BitTagReq- OBFF Not Supported, ExtFmt- EETLPPrefix-
             EmergencyPowerReduction Not Supported, EmergencyPowerReductionInit-
             FRS- TPHComp- ExtTPHComp-
             AtomicOpsCap: 32bit- 64bit- 128bitCAS-
        DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis- LTR+ 10BitTagReq- OBFF Disabled,
             AtomicOpsCtl: ReqEn-
        LnkCap2: Supported Link Speeds: 2.5-16GT/s, Crosslink- Retimer+ 2Retimers+ DRS-
        LnkCtl2: Target Link Speed: 16GT/s, EnterCompliance- SpeedDis-
             Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
             Compliance Preset/De-emphasis: -6dB de-emphasis, 0dB preshoot
        LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+ EqualizationPhase1+
             EqualizationPhase2+ EqualizationPhase3+ LinkEqualizationRequest- Retimer- 2Retimers- CrosslinkRes: Upstream Port
    Capabilities: [b0] MSI-X: Enable+ Count=17 Masked-
        Vector table: BAR=0 offset=00003000
        PBA: BAR=0 offset=00002000
    Capabilities: [100 v2] Advanced Error Reporting
        UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
        UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
        UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
        CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr-
        CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- AdvNonFatalErr+
        AERCap: First Error Pointer: 00, ECRCGenCap+ ECRCGenEn- ECRCChkCap+ ECRCChkEn-
            MultHdrRecCap+ MultHdrRecEn- TLPPfxPres- HdrLogCap-
        HeaderLog: 00000000 00000000 00000000 00000000
    Capabilities: [168 v1] Secondary PCI Express
        LnkCtl3: LnkEquIntrruptEn- PerformEqu-
        LaneErrStat: 0
    Capabilities: [188 v1] Physical Layer 16.0 GT/s <?>
    Capabilities: [1ac v1] Lane Margining at the Receiver <?>
    Capabilities: [1c4 v1] Latency Tolerance Reporting
        Max snoop latency: 15728640ns
        Max no snoop latency: 15728640ns
    Capabilities: [1cc v1] L1 PM Substates
        L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
              PortCommonModeRestoreTime=10us PortTPowerOnTime=10us
        L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
               T_CommonMode=0us LTR1.2_Threshold=30720ns
        L1SubCtl2: T_PwrOn=10us
    Capabilities: [350 v1] Data Link Feature <?>
    Kernel driver in use: nvme
    Kernel modules: nvme
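For anyone who wants just the link negotiation lines without wading through the full dump, something like this should work (the 01:00.0 address is taken from the output above):
Code:
# Show only the advertised vs. negotiated PCIe speed and width for the SSD
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'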