
Posted

(placeholder for a Board Bring Up thread -- now just collecting random nonsense)

 

[Image: RockPro64 without WiFi module]

 

[Image: RockPro64 with WiFi module]

 

Board arrived today. Running ayufan's Debian 9 image (which for whatever reason uses 1.8/1.4 GHz settings for the big.LITTLE cores instead of 2.0/1.5 GHz), I checked heat dissipation with Pine Inc's huge heatsink first. The huge heatsink combined with a thin thermal pad shows really nice thermal values:

 

  • below 50°C when running cpuburn-a53 on two A53 and two A72 cores in parallel with a fan blowing over the heatsink's surface
  • Board in upright position running the same cpuburn-a53 task without a fan still reports below 85°C after 20 minutes, even with the heatsink orientation slightly wrong
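
For anyone reproducing this: a minimal way to watch the reported SoC temperature while such a cpuburn-a53 test runs (the sysfs paths are the generic thermal ones; zone numbering per board is an assumption, so list the zone types first):

cat /sys/class/thermal/thermal_zone*/type              # identify the CPU/GPU zones
watch -n5 'cat /sys/class/thermal/thermal_zone*/temp'  # values are in millidegrees C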

 

Summary: passive cooling is all you need, and you should really spend the additional few bucks on Pine Inc's well-designed heatsink since it's inexpensive and a great performer.

 

Posted

I really wonder why this board is sold without a heatsink.

Is the design of this heatsink unique, or is it possible to use some northbridge heatsink or something similar?

Posted
13 hours ago, Da Alchemist said:

Is the design of this heatsink unique

 

Nope, every northbridge cooler with a mounting hole distance of 60 mm will work.

 

The latest ayufan release from a few hours ago activated USB, so let's do a quick USB3 storage benchmark with an EVO840 in a JMS567 enclosure. Testing with our usual iozone benchmark:

                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    19833    26941    35089    35464    24545    26763    
          102400      16    66374    85914   107740   108213    79784    86035    
          102400     512   308034   318723   298717   301950   293118   316910    
          102400    1024   335762   349205   335279   338808   330313   346358    
          102400   16384   379379   382513   378721   383947   383529   384295
         1024000   16384   397369   402562   384391   384846

Up to 400 MB/s as expected. Current ayufan images are not yet optimized for performance: they're missing the increase in CPU clockspeeds (1.4/1.8 GHz instead of 1.5/2.0 GHz) and also don't use the latest DRAM initialization (RockPro64 is the first RK3399 device using LPDDR4, so with appropriate DRAM initialization everything memory dependent -- e.g. I/O DMA -- will be faster afterwards).
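
For reference, the iozone invocation behind these tables is our usual one (it appears verbatim with hjc's results further down this thread); the extra 1024000 kB row presumably comes from an additional run with a larger test file:

iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2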

 

I also did a quick iperf3 test in the meantime: 940+ Mbits/sec in both directions after ayufan adjusted the TX/RX delay values:

 

macbookpro-tk:~ tk$ iperf3 -R -c 192.168.83.36
Connecting to host 192.168.83.36, port 5201
Reverse mode, remote host 192.168.83.36 is sending
[  4] local 192.168.83.32 port 52038 connected to 192.168.83.36 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   110 MBytes   923 Mbits/sec                  
[  4]   1.00-2.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   2.00-3.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   3.00-4.00   sec   112 MBytes   942 Mbits/sec                  
[  4]   4.00-5.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   5.00-6.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   6.00-7.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   7.00-8.00   sec   112 MBytes   941 Mbits/sec                  
[  4]   8.00-9.00   sec   112 MBytes   942 Mbits/sec                  
[  4]   9.00-10.00  sec   112 MBytes   941 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  1.09 GBytes   941 Mbits/sec                  receiver

iperf Done.
macbookpro-tk:~ tk$ iperf3 -c 192.168.83.36
Connecting to host 192.168.83.36, port 5201
[  4] local 192.168.83.32 port 52042 connected to 192.168.83.36 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   113 MBytes   946 Mbits/sec                  
[  4]   1.00-2.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   2.00-3.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   3.00-4.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   4.00-5.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   5.00-6.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   6.00-7.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   7.00-8.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   8.00-9.00   sec   112 MBytes   940 Mbits/sec                  
[  4]   9.00-10.00  sec   112 MBytes   940 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  sender
[  4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver

iperf Done.
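
To reproduce: the board side just runs the iperf3 server, the client side issues the two commands shown above (the board's IP is the one from this example):

iperf3 -s                      # on the RockPro64
iperf3 -c 192.168.83.36        # on the client: client -> board
iperf3 -R -c 192.168.83.36     # on the client: board -> client (reverse mode)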

 

Posted

I'm eyeballing this board to use it for a cheap NAS solution using a PCIe expansion board to connect 2 (or more) SATA disks. No RAID setup or anything like that, just to store movies, tv shows, etc. I noticed you can order such an expansion board from the Pine64 guys (not sure if this is the best board though).

 

Anyway, I'm wondering if this is working at the moment (SATA expansion through PCIe) and if there are any benchmark results? FrankM writes on his website that this is not working. Maybe tkaiser has tested this as well? 

 

There is a new batch going to be available soon and I'm not sure if I should wait for a more mature build of the board or just buy it right away.

Posted
17 minutes ago, Yoast said:

I'm eyeballing this board to use it for a cheap NAS solution using a PCIe expansion board to connect 2 (or more) SATA disks. No RAID setup or anything like that, just to store movies, tv shows, etc. I noticed you can order such an expansion board from the Pine64 guys (not sure if this is the best board though).

 

Anyway, I'm wondering if this is working at the moment (SATA expansion through PCIe) and if there are any benchmark results? FrankM writes on his website that this is not working. Maybe tkaiser has tested this as well? 

 

There is a new batch going to be available soon and I'm not sure if I should wait for a more mature build of the board or just buy it right away.

I'd rather recommend a ROCK64 with some USB3 multi-bay HDD enclosure, since GbE is the bottleneck anyway -- unless you've got something like Ethernet link aggregation, but that's beyond the topic of a 'cheap' NAS.

Posted
16 hours ago, Yoast said:

I'm eyeballing this board to use it for a cheap NAS solution using a PCIe expansion board to connect 2 (or more) SATA disks. No RAID setup or anything like that, just to store movies, tv shows, etc. I noticed you can order such an expansion board from the Pine64 guys (not sure if this is the best board though).

 

Anyway, I'm wondering if this is working at the moment (SATA expansion through PCIe) and if there are any benchmark results? FrankM writes on his website that this is not working. Maybe tkaiser has tested this as well? 

 

There is a new batch going to be available soon and I'm not sure if I should wait for a more mature build of the board or just buy it right away.

 

Hi Yoast,

 

The problem is a resistor. I will remove it on Monday, so I hope to do some benchmarks next week -- if the PCIe expansion board works then ;)

Posted
On 6/1/2018 at 7:57 PM, firecb said:

I'd rather recommend a ROCK64 with some USB3 multi-bay HDD enclosure, since GbE is the bottleneck anyway -- unless you've got something like Ethernet link aggregation, but that's beyond the topic of a 'cheap' NAS.

 

Thanks for your message and suggestion, firecb. I have a couple of 2.5" SATA disks lying around, including the one currently connected to my BananaPi. I was looking for a multi-bay enclosure but couldn't find one. I did come across a lot of USB-to-SATA cables.

I'm also no big fan of connecting storage through USB. I never liked the receptacle and experienced some stability issues when I had a second USB drive connected to my BananaPi (not sure if it was because of the cable or connector).

You're right about the bottleneck :). Right now my bottleneck is the crappy speed on my BananaPi (~20 MB/s) so I would like to see that change to a GbE bottleneck :)

 

 

On 6/2/2018 at 12:18 PM, FrankM said:

 

Hi Yoast,

 

The problem is a resistor. I will remove it on Monday, so I hope to do some benchmarks next week -- if the PCIe expansion board works then ;)

 

Wow, removing a resistor. That's some hardcore tinkering, hope that'll work :).  Looking forward to seeing some benchmarks.

Posted

Ok, it works :)

 

rock64@rockpro64:~$ lspci
00:00.0 PCI bridge: Rockchip Inc. RK3399 PCI Express Root Port Device 0100
01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961

 

rock64@rockpro64:~$ sudo lspci -vvv
[sudo] password for rock64: 
00:00.0 PCI bridge: Rockchip Inc. RK3399 PCI Express Root Port Device 0100 (prog-if 00 [Normal decode])
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort+ <TAbort+ <MAbort+ >SERR+ <PERR+ INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 238
	Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
	I/O behind bridge: 00000000-00000fff
	Memory behind bridge: fa000000-fa0fffff
	Prefetchable memory behind bridge: 00000000-000fffff
	Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
	BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
		PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
	Capabilities: [80] Power Management version 3
		Flags: PMEClk- DSI- D1+ D2- AuxCurrent=0mA PME(D0+,D1+,D2-,D3hot+,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME+
	Capabilities: [90] MSI: Enable+ Count=1/1 Maskable+ 64bit+
		Address: 00000000fee30040  Data: 0000
		Masking: 00000000  Pending: 00000000
	Capabilities: [b0] MSI-X: Enable- Count=1 Masked-
		Vector table: BAR=0 offset=00000000
		PBA: BAR=0 offset=00000008
	Capabilities: [c0] Express (v2) Root Port (Slot+), MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0
			ExtTag- RBE+
		DevCtl:	Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
		LnkCap:	Port #0, Speed 5GT/s, Width x4, ASPM L1, Exit Latency L0s <256ns, L1 <8us
			ClockPM- Surprise- LLActRep- BwNot+ ASPMOptComp+
		LnkCtl:	ASPM L1 Enabled; RCB 64 bytes Disabled- CommClk-
			ExtSynch- ClockPM- AutWidDis- BWInt+ AutBWInt+
		LnkSta:	Speed 5GT/s, Width x2, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
		SltCap:	AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
			Slot #0, PowerLimit 0.000W; Interlock- NoCompl-
		SltCtl:	Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
			Control: AttnInd Off, PwrInd Off, Power+ Interlock-
		SltSta:	Status: AttnBtn- PowerFlt- MRL+ CmdCplt- PresDet- Interlock-
			Changed: MRL- PresDet- LinkState-
		RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible-
		RootCap: CRSVisible-
		RootSta: PME ReqID 0000, PMEStatus- PMEPending-
		DevCap2: Completion Timeout: Range B, TimeoutDis+, LTR+, OBFF Via message ARIFwd+
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled ARIFwd-
		LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
			 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
	Capabilities: [274 v1] Transaction Processing Hints
		Interrupt vector mode supported
		Device specific mode supported
		Steering table in TPH capability structure
	Kernel driver in use: pcieport

01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961 (prog-if 02 [NVM Express])
	Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 237
	Region 0: Memory at fa000000 (64-bit, non-prefetchable) [size=16K]
	Capabilities: [40] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
	Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [70] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 256 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
		DevCtl:	Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
			MaxPayload 128 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
		LnkCap:	Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L0s unlimited, L1 <64us
			ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
		LnkCtl:	ASPM L1 Enabled; RCB 64 bytes Disabled- CommClk-
			ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 5GT/s, Width x2, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR+, OBFF Not Supported
		DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
		LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
			 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [b0] MSI-X: Enable+ Count=8 Masked-
		Vector table: BAR=0 offset=00003000
		PBA: BAR=0 offset=00002000
	Capabilities: [100 v2] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
		CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
	Capabilities: [148 v1] Device Serial Number 00-00-00-00-00-00-00-00
	Capabilities: [158 v1] Power Budgeting <?>
	Capabilities: [168 v1] #19
	Capabilities: [188 v1] Latency Tolerance Reporting
		Max snoop latency: 0ns
		Max no snoop latency: 0ns
	Capabilities: [190 v1] L1 PM Substates
		L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
			  PortCommonModeRestoreTime=10us PortTPowerOnTime=10us
		L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
			   T_CommonMode=0us LTR1.2_Threshold=0ns
		L1SubCtl2: T_PwrOn=10us
	Kernel driver in use: nvme

A quick dd write test on the SSD:

rock64@rockpro64:/mnt$ sudo dd if=/dev/zero of=sd.img bs=1M count=4096 conv=fdatasync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 12.6595 s, 339 MB/s

Atm only x2 :(
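
To check the negotiated link without wading through the full lspci dump (device address as above; LnkCap is what the card supports, LnkSta is what actually trained):

sudo lspci -s 01:00.0 -vv | grep -E 'LnkCap|LnkSta'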

 

Used kernel:

 

rock64@rockpro64:~$ uname -a
 Linux rockpro64 4.4.126-rockchip-ayufan-257 #1 SMP Sun Jun 10 18:30:43 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

 

Posted
On 6/1/2018 at 7:34 PM, Yoast said:

Anyway, I'm wondering if this is working at the moment (SATA expansion through PCIe) and if there are any benchmark results? FrankM writes on his website that this is not working. Maybe tkaiser has tested this as well? 

 

Yes, tkaiser has tested this already. On another RK3399 device: https://forum.armbian.com/topic/6496-odroid-n1-not-a-review-yet/?do=findComment&comment=49400

 

And here you find a general storage performance overview also covering RK3399 devices: https://forum.armbian.com/topic/1925-some-storage-benchmarks-on-sbcs/?do=findComment&comment=51350

 

 

The ODROID-N1 in my tests was also running RK's 4.4 kernel, and it uses the same ASM1061 controller the Pine folks sell as an add-on card. The only other performance-relevant parameters are

  • CPU clockspeeds (these are simple tunables in the device tree to let the CPU cores clock slightly higher)
  • DRAM initialization/parameters (we should get a slight speed improvement on RockPro64 in the future once the LPDDR4 memory uses optimized settings)
  • Settings (e.g. PCIe power management active or not -- see the ODROID N1 thread above)

So everything is already known. :-)
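
Checking the first point from userspace is easy (a sketch; policy0/policy4 mapping to the A53/A72 clusters is an assumption based on the usual RK3399 cpu0-3/cpu4-5 layout):

cat /sys/devices/system/cpu/cpufreq/policy0/cpuinfo_max_freq   # A53 cluster, in kHz
cat /sys/devices/system/cpu/cpufreq/policy4/cpuinfo_max_freq   # A72 cluster, in kHz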

 

Posted
10 hours ago, tkaiser said:

 

I've seen x4 so far only with mainline (4.16 back then): https://gist.github.com/kgoger/4a6a363d2999e512cd057148404a29dc (this is on Theobroma's Q7-RK3399 -- maybe different trace routing is also important?)

 

On the NanoPC T4 it seems to be x4, but only at 2.5GT/s, i.e. PCIe 1.0 speed.

 

root@hjc-nanopc:/home/hjc# lspci -vvv
00:00.0 PCI bridge: Device 1d87:0100 (prog-if 00 [Normal decode])
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort+ <TAbort+ <MAbort+ >SERR+ <PERR+ INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 230
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
        I/O behind bridge: 00000000-00000fff
        Memory behind bridge: fa000000-fa0fffff
        Prefetchable memory behind bridge: 00000000-000fffff
        Secondary status: 66MHz- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- <SERR- <PERR-
        BridgeCtl: Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
                PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
        Capabilities: [80] Power Management version 3
                Flags: PMEClk- DSI- D1+ D2- AuxCurrent=0mA PME(D0+,D1+,D2-,D3hot+,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME+
        Capabilities: [90] MSI: Enable+ Count=1/1 Maskable+ 64bit+
                Address: 00000000fee30040  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [b0] MSI-X: Enable- Count=1 Masked-
                Vector table: BAR=0 offset=00000000
                PBA: BAR=0 offset=00000008
        Capabilities: [c0] Express (v2) Root Port (Slot+), MSI 00
                DevCap: MaxPayload 256 bytes, PhantFunc 0
                        ExtTag- RBE+
                DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                LnkCap: Port #0, Speed 2.5GT/s, Width x4, ASPM L1, Exit Latency L0s <256ns, L1 <8us
                        ClockPM- Surprise- LLActRep- BwNot+ ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes Disabled- CommClk-
                        ExtSynch- ClockPM- AutWidDis- BWInt+ AutBWInt+
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
                SltCap: AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug- Surprise-
                        Slot #0, PowerLimit 0.000W; Interlock- NoCompl-
                SltCtl: Enable: AttnBtn- PwrFlt- MRL- PresDet- CmdCplt- HPIrq- LinkChg-
                        Control: AttnInd Off, PwrInd Off, Power+ Interlock-
                SltSta: Status: AttnBtn- PowerFlt- MRL+ CmdCplt- PresDet- Interlock-
                        Changed: MRL- PresDet- LinkState-
                RootCtl: ErrCorrectable- ErrNon-Fatal- ErrFatal- PMEIntEna+ CRSVisible-
                RootCap: CRSVisible-
                RootSta: PME ReqID 0000, PMEStatus- PMEPending-
                DevCap2: Completion Timeout: Range B, TimeoutDis+, LTR+, OBFF Via message ARIFwd+
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled ARIFwd-
                LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
        Capabilities: [274 v1] Transaction Processing Hints
                Interrupt vector mode supported
                Device specific mode supported
                Steering table in TPH capability structure
        Kernel driver in use: pcieport

01:00.0 Non-Volatile memory controller: Intel Corporation Device f1a5 (rev 03) (prog-if 02 [NVM Express])
        Subsystem: Intel Corporation Device 390a
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0
        Interrupt: pin A routed to IRQ 229
        Region 0: Memory at fa000000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [70] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s unlimited, L1 unlimited
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-
                        MaxPayload 128 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr+ TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x4, ASPM L1, Exit Latency L0s <1us, L1 <8us
                        ClockPM+ Surprise- LLActRep- BwNot- ASPMOptComp+
                LnkCtl: ASPM L1 Enabled; RCB 64 bytes Disabled- CommClk-
                        ExtSynch- ClockPM+ AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 2.5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR+, OBFF Via message
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -3.5dB, EqualizationComplete-, EqualizationPhase1-
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [b0] MSI-X: Enable+ Count=16 Masked-
                Vector table: BAR=0 offset=00002000
                PBA: BAR=0 offset=00002100
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
        Capabilities: [158 v1] #19
        Capabilities: [178 v1] Latency Tolerance Reporting
                Max snoop latency: 0ns
                Max no snoop latency: 0ns
        Capabilities: [180 v1] L1 PM Substates
                L1SubCap: PCI-PM_L1.2+ PCI-PM_L1.1+ ASPM_L1.2+ ASPM_L1.1+ L1_PM_Substates+
                          PortCommonModeRestoreTime=10us PortTPowerOnTime=10us
                L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1-
                           T_CommonMode=0us LTR1.2_Threshold=0ns
                L1SubCtl2: T_PwrOn=10us
        Kernel driver in use: nvme

Kernel:

root@hjc-nanopc:/home/hjc# uname -a
Linux hjc-nanopc 4.4.132-rk3399 #8 SMP Wed Jun 13 13:03:44 HKT 2018 aarch64 GNU/Linux

Somehow it seems to be capped at around 500 MB/s, and has much lower 4k IOPS (Intel 600p 256GB).

        Iozone: Performance Test of File I/O
                Version $Revision: 3.429 $
                Compiled for 64 bit mode.
                Build: linux

        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                     Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                     Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                     Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                     Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                     Vangel Bojaxhi, Ben England, Vikentsi Lapa.

        Run began: Wed Jun 13 05:29:26 2018

        Include fsync in write timing
        O_DIRECT feature enabled
        Auto Mode
        File size set to 102400 kB
        Record Size 4 kB
        Record Size 16 kB
        Record Size 512 kB
        Record Size 1024 kB
        Record Size 16384 kB
        Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride             
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    46753    69928    44747    44325    23204    58273                                          
          102400      16   124122   169052    92119    91352    73088   106009                                          
          102400     512   420699   451433   389454   390496   390295   480645                                          
          102400    1024   465004   524627   464479   466948   466706   490374                                          
          102400   16384   507718   461558   546822   545124   524262   503565                                          

iozone test complete.

 

For comparison, here are results on an Intel NUC7i5BNK

    Iozone: Performance Test of File I/O
            Version $Revision: 3.429 $
        Compiled for 64 bit mode.
        Build: linux-AMD64 

    Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
                 Al Slater, Scott Rhine, Mike Wisner, Ken Goss
                 Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
                 Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
                 Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
                 Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
                 Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
                 Vangel Bojaxhi, Ben England, Vikentsi Lapa.

    Run began: Wed Jun 13 13:13:09 2018

    Include fsync in write timing
    O_DIRECT feature enabled
    Auto Mode
    File size set to 102400 kB
    Record Size 4 kB
    Record Size 16 kB
    Record Size 512 kB
    Record Size 1024 kB
    Record Size 16384 kB
    Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4   177910   195270    73608    74533    35979   175159                                                          
          102400      16   346863   395238   153802   156536   110934   382846                                                          
          102400     512   585384   597408   854478   873586   867914   590035                                                          
          102400    1024   588475   188750  1071468  1097910  1081047   594930                                                          
          102400   16384   490908   592000  1452265  1466850  1466042   584065                                                          

iozone test complete.

 

Posted
39 minutes ago, hjc said:

LnkCap: Port #0, Speed 2.5GT/s

 

You need to adjust the DT on your NanoPC since the device claims to be capable of PCIe Gen1 only. Please see also https://patchwork.kernel.org/patch/9345861/

 

@FrankM: It would be interesting to test on your RockPro64 with the link speed set to 1 and see whether an x4 link gets established then: https://github.com/ayufan-rock64/linux-kernel/blob/65dce4f6760180105cda50d6f8d603e25eaf26fc/arch/arm64/boot/dts/rockchip/rk3399-rockpro64.dts#L260
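
A quick way to check what the running device tree actually contains before rebuilding anything (a sketch; the max-link-speed property name comes from the patch linked above, a one-cell value where 1 = Gen1 and 2 = Gen2):

find /proc/device-tree -name max-link-speed -exec hexdump -C {} \;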

Posted
On 5/25/2018 at 11:51 PM, tkaiser said:

The latest ayufan release from a few hours ago activated USB, so let's do a quick USB3 storage benchmark with an EVO840 in a JMS567 enclosure. Testing with our usual iozone benchmark:


                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    19833    26941    35089    35464    24545    26763    
          102400      16    66374    85914   107740   108213    79784    86035    
          102400     512   308034   318723   298717   301950   293118   316910    
          102400    1024   335762   349205   335279   338808   330313   346358    
          102400   16384   379379   382513   378721   383947   383529   384295
         1024000   16384   397369   402562   384391   384846

Up to 400 MB/s as expected.

 

In the meantime I ordered a USB-C drive enclosure based on the VIA VL716. A quick check with lsusb shows 'Bus 008 Device 002: ID 2109:0715 VIA Labs, Inc.' (that's the same product ID as VIA's VL715, which is basically the same chip as the VL716 except that the latter contains the USB-C stuff). My first test was disappointing since performance looked like USB2, and a quick 'lsusb -t' confirmed it: the enclosure only established a Hi-Speed but no SuperSpeed connection. After replugging the cable it worked one time, but then I got connection issues (that looked like the usual UAS software troubles in the logs).
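
For reference, that check looks like this -- 'lsusb -t' prints the negotiated speed per port, so an enclosure stuck on USB2 is easy to spot:

lsusb -t    # 480M = Hi-Speed (USB2), 5000M = SuperSpeed (USB3); 'Driver=uas' shows whether UAS is in use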

 

Now after a reboot it looks like this:

                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    15833    21863    24630    24740    15728    17119
          102400      16    41332    57594    63793    63974    53032    57263
          102400     512   236008   246826   232867   238458   231778   231332
          102400    1024   282755   296478   280089   282768   279200   295004
          102400   16384   377360   377555   388708   396527   395120   383106

Well, the numbers look OK, but I have to admit that USB storage kinda sucks. 'armbianmonitor -u' output: http://ix.io/1dbY

Posted
16 minutes ago, tkaiser said:

You need to adjust the DT on your NanoPC since the device claims to be capable of PCIe Gen1 only. Please see also https://patchwork.kernel.org/patch/9345861/

 

With the DT modified, I got 5GT/s x4 working (lspci shows x4 in LnkCap/LnkSta, but I suspect it's still effectively only x2 -- see the results below).

Now the disk performance is a lot better, though it's still not as fast as on the Intel platform. Tested with the scaling governor set to "performance" and pinned to cores 4-5 using taskset; a sketch of that setup follows the results.

                                                              random    random     bkwd    record    stride             
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    79329   110933    67335    67356    34010   109116                                          
          102400      16   237860   300516   144350   144549   104765   294656                                          
          102400     512   576148   566193   739793   743689   742523   577145                                          
          102400    1024   566945   565336   905777   914270   912475   575646                                          
          102400   16384   551399   563214  1032516  1041588  1042083   553401
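
The setup described above, as a sketch (assuming policy4 covers the A72 cores; the iozone call is the same one used throughout this thread):

echo performance | sudo tee /sys/devices/system/cpu/cpufreq/policy4/scaling_governor
taskset -c 4,5 iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2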

 

Posted
3 minutes ago, hjc said:

Now the disk performance is a lot better, though it's still not as fast as on the Intel platform. Tested with the scaling governor set to "performance" and pinned to cores 4-5 using taskset.

 

You should keep in mind that your SSD wants (and on Intel gets) PCIe Gen3 ('LnkCap: Port #0, Speed 8GT/s, Width x4') and that RK's 4.4 kernel allows for PCIe power management (defaulting to 'powersave'). I would assume retesting with an adjusted /sys/module/pcie_aspm/parameters/policy sysfs node won't change results with larger blocksizes, most probably since your SSD wants Gen3 link speed to perform great (I've seen this several times in reviews: even when the theoretical max bandwidth of a Gen2 x4 link would be more than sufficient, the same SSD performs faster on a Gen3 x2 link, which allows for the same transfer rates).
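
The sysfs node in question, for a quick A/B test (a sketch; 'performance' disables the ASPM savings):

cat /sys/module/pcie_aspm/parameters/policy            # active policy is shown in brackets
echo performance | sudo tee /sys/module/pcie_aspm/parameters/policy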

Posted
6 hours ago, tkaiser said:

 

Yes, tkaiser has tested this already. On another RK3399 device: https://forum.armbian.com/topic/6496-odroid-n1-not-a-review-yet/?do=findComment&comment=49400

 

And here you find a general storage performance overview also covering RK3399 devices: https://forum.armbian.com/topic/1925-some-storage-benchmarks-on-sbcs/?do=findComment&comment=51350

 

 

The ODROID-N1 in my tests was also running RK's 4.4 kernel, and it uses the same ASM1061 controller the Pine folks sell as an add-on card. The only other performance-relevant parameters are

  • CPU clockspeeds (these are simple tunables in the device tree to let the CPU cores clock slightly higher)
  • DRAM initialization/parameters (we should get a slight speed improvement on RockPro64 in the future once the LPDDR4 memory uses optimized settings)
  • Settings (e.g. PCIe power management active or not -- see the ODROID N1 thread above)

So everything is already known. :-)

 

:) Thanks for the information! I like the numbers you showed for the purpose I have in mind, and it looks like those numbers will be similar (or even slightly better) on the RockPro64. Right now I'm hoping to start seeing reports of a working add-on card without the need to remove resistors. I'm not sure if the issues reported are solely software-based...

Posted

I am wondering if there will be an Armbian image for the RockPro64.

I intend to build an application using the RockPro64 AI version, which should go on sale in August, and I really like using Armbian in my projects.

How does Armbian support for a new board work? Do board manufacturers commit to maintaining upgrades?

Or is the work done only by the community, without a direct relationship with the manufacturer?

Posted
8 hours ago, Alexandre Ignacio Barboza said:

How does Armbian support for a new board work?

Creating a bootable (but not optimized) headless Armbian image for a new Rockchip-based board is not difficult if you are familiar with both Rockchip's boot process and the Armbian build scripts. I've been using Rockchip-based boards since last year, and recently spent only a few hours creating images for my Firefly-RK3399 and NanoPC-T4, with both the legacy 4.4 and mainline 4.17 kernels.

Rockchip provides comparatively good BSP and upstream support (at least compared to Allwinner); sometimes the pieces just need to be put together properly. But board vendors are not always willing to do the job; many of them just use an old u-boot (2014.x) and the "Android way" of loading the kernel (i.e. pack the kernel into an .img, flash it to a fixed offset on eMMC/SD and load it into memory so that u-boot doesn't even need filesystem drivers -- at least Firefly and NanoPC T4 have official "Linux" images in this form).

 

8 hours ago, Alexandre Ignacio Barboza said:

Do board manufacturers commit to maintaining upgrades?

Board/SoC vendors maintain the u-boot/kernel sources; Armbian creates build scripts to pack them up, with patches and optimizations.
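
For the curious, the build entry point looks roughly like this (a sketch; the BOARD value is an assumption since RockPro64 support was still being worked on at the time, and BRANCH=default selects the legacy 4.4 kernel in the naming the build system used back then):

git clone --depth=1 https://github.com/armbian/build
cd build
./compile.sh BOARD=rockpro64 BRANCH=default RELEASE=stretch KERNEL_ONLY=no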

Posted

From my point of view, the RockPro64 is a poor performer at the moment. The RK3399 is not new; it has been available for over two years now. There are TV boxes and Chromebooks working with it, but on the RockPro64 it is not possible to run a minimal server with the BSP kernel idling without crashes. Even overvolting does not bring more stability. Regarding the resistor that has to be removed to get PCIe partially running, I doubt that only software issues have to be resolved to get this board running.

Perhaps there is a reason why competitors' boards are more expensive. I did not expect to see this board running fully supported after a few weeks, but I also did not expect such instability, not even in the beginning.

Keeping in mind ayufan's comments that some things work on pre-release boards, I wonder why the guys at Pine64 changed so many hardware details for the boards that went on sale.

 

For now I would not recommend buying this board; the board itself is at an early stage of development, not only the software.

 

 

Posted
2 hours ago, Da Alchemist said:

Perhaps there is a reason why competitors' boards are more expensive

The reason is that those more expensive RK3399 boards have an eMMC and a Wi-Fi/BT module, or even a heatsink and power supply, included. In the Pine64 store these components cost an extra $15.95 (16GB eMMC) + $15.99 (Wi-Fi/BT) + $3.49 (heatsink) + $8.99 (power supply) = $44.42.

 

If you compare carefully between the NanoPC T4 ($129 = 4GB RAM + all of the above components included) and the RockPro64 ($79.99 + $44.42 = $124.41), you will find that the RockPro64 is not a lot cheaper. It just eliminates those components that you may not use at all, or those you may already have and don't need another of (e.g. a power supply).

 

 

 

Posted
On 16 June 2018 at 1:48 PM, Da Alchemist said:

Keeping in mind ayufan's comments that some things work on pre-release boards, I wonder why the guys at Pine64 changed so many hardware details for the boards that went on sale.

 

For now I would not recommend buying this board; the board itself is at an early stage of development, not only the software.

I think that the hardware development is already complete, unlike the software development, which has only recently started and is progressing quickly (a new pre-release almost every day).

 

As to the hardware revisions, the difference between V1.0 and V2.0 is quite large. The difference between V2.0 and the July 2018 batch is only 1 omitted resistor (R895381), which was probably not needed in the first place. So the bare PCB can be the same as for V2.0. In general, the number of hardware revisions of a product is much lower than the number of software revisions. This is natural.

 

But what's interesting to me is how the price of RockPro64, Orange Pi RK3399, and other RK3399-based boards like the NanoPC T4 can be so low. As many of you probably don't know, the Chinese government subsidises Shenzhen Xunlong Software Company (the maker of the Orange Pi boards), and so they can afford to sell their boards at the BOM (Bill of Materials) cost, not following the "golden rule" that the retail price of a product must be 3 times higher than the BOM cost to pay salaries, depreciation allowances, etc. But their RK3399 board has only 2 GB of DRAM and 16 GB of eMMC flash memory and sells for $109. The 2 GB RockPro64 variant plus 16 GB eMMC sells for even less: just $75. How is this possible? Does the Chinese government subsidise Pine Microsystems as well? Unlikely! Then, what is the secret behind their astonishingly low prices?

Posted
10 minutes ago, lucho said:

As many of you probably don't know, the Chinese government subsidises Shenzhen Xunlong Software Company (the maker of the Orange Pi boards),

 

See here: https://www.cnx-software.com/2017/07/15/office-factory-business-model-and-ambitious-plans-of-shenzhen-xunlong-software-orange-pi-maker/

Quote

3. About our cost and subsidies from government, yes, we sell most of our product at BOM cost, but we also have some models that still have margin profits, like PC Plus, Plus 2. If we keep running behind our expenses, how could government could support us? Government would not support you, if you have no income, they could only support you parts of the cost, like engineering cost, or part or office renting cost. So most of the cost should cover by ourself, we need to figure out how to how to sell more with margin profit. As you see, we do it now.

 

16 minutes ago, lucho said:

The 2 GB RockPro64 variant plus 16 GB eMMC sells for even less: just $75.

The RockPro64 is IMO a 'minimal' SBC (to be fair, here you'd have to add another ~$15 for the 2x2 MIMO Wi-Fi and some bucks for the TC358749X, an HDMI to MIPI-CSI bridge IC), whereas Xunlong's board is somewhat of an 'exotic' configuration (e.g. HDMI-IN, hall sensor, ALC5621 for audio-related stuff). Xunlong's board is IMO an evaluation kit, somehow similar to the Firefly. They may all go to the limit to be cheaper than the competitors. The fact that the Pine board's first batch sold out on the first day might be part of the price calculation. ;)

 

For server use cases the RockPro64 in the 2GB configuration seems to be a really nice board (for such a use case, IMO, you either don't need Wi-Fi or need a really powerful one which is likely not SDIO-based, and you don't need all those sensors populated, etc.). I think Pine built a really nice board which can be upgraded (eMMC socket, SDIO Wi-Fi 'socket', SPI flash, PCIe). Let's see how RK's NPU will perform as soon as the latest RockPro is available (could be fun :P).

 

On 6/14/2018 at 7:20 AM, hjc said:

But board vendors are not always willing to do the job; many of them just use an old u-boot (2014.x) and the "Android way" of loading the kernel.

Those old u-boot versions are always annoying (I had some 'fun' with the MT7623 and a 2014-based u-boot). Do you know how mature mainline u-boot for the RK3399 is? You still use the old one for the Firefly; hopefully we can bring all those boards up with a recent u-boot, together with some bits here and there to initialize DRAM properly for each board...

Posted
1 minute ago, chwe said:

 

On 6/14/2018 at 1:20 PM, hjc said:

But board vendors are not always willing to do the job; many of them just use an old u-boot (2014.x) and the "Android way" of loading the kernel.

Those old u-boot versions are always annoying (I had some 'fun' with the MT7623 and a 2014-based u-boot). Do you know how mature mainline u-boot for the RK3399 is? You still use the old one for the Firefly; hopefully we can bring all those boards up with a recent u-boot, together with some bits here and there to initialize DRAM properly for each board...

RK3399 is supported by mainline u-boot.

 

I changed BOOTSOURCE to https://github.com/u-boot/u-boot.git and checked it on the Firefly-RK3399. It runs pretty well, even better than Rockchip's u-boot (2017.x): good USB & Ethernet support, booting the OS flawlessly.
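
In build-config terms that's just overriding the u-boot source the scripts fetch (a sketch; variable names as used by the Armbian build system, with the tag matching the 2018.07-rc1 binary shown below):

BOOTSOURCE='https://github.com/u-boot/u-boot.git'
BOOTBRANCH='tag:v2018.07-rc1'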

I'm already running some heavy services on the NanoPC-T4 so I'm not rebooting it, but I assume it works basically the same there.

 

Playing around with u-boot:

DDR Version 1.12 20180518
In
Channel 0: DDR3, 800MHz
Bus Width=32 Col=10 Bank=8 Row=15/15 CS=2 Die Bus-Width=16 Size=2048MB
Channel 1: DDR3, 800MHz
Bus Width=32 Col=10 Bank=8 Row=15/15 CS=2 Die Bus-Width=16 Size=2048MB
256B stride
ch 0 ddrconfig = 0x101, ddrsize = 0x2020
ch 1 ddrconfig = 0x101, ddrsize = 0x2020
pmugrf_os_reg[2] = 0x3AA17AA1, stride = 0xD
OUT
Boot1: 2018-04-08, version: 1.12
CPUId = 0x0
ChipType = 0x10, 217
SdmmcInit=2 0
BootCapSize=100000
UserCapSize=29820MB
FwPartOffset=2000 , 100000
mmc0:cmd5,32
SdmmcInit=0 0
BootCapSize=0
UserCapSize=15193MB
FwPartOffset=2000 , 0
StorageInit ok = 80087
LoadTrustBL
No find bl30.bin
No find bl32.bin
Load uboot, ReadLba = 2000
Load OK, addr=0x200000, size=0x90f40
RunBL31 0x10000
NOTICE:  BL31: v1.3(debug):f947c7e
NOTICE:  BL31: Built : 19:40:58, Jun 11 2018
NOTICE:  BL31: Rockchip release version: v1.1
INFO:    GICv3 with legacy support detected. ARM GICV3 driver initialized in EL3
INFO:    plat_rockchip_pmu_init(1089): pd status 3e
INFO:    BL31: Initializing runtime services
INFO:    BL31: Preparing for EL3 exit to normal world
INFO:    Entry point address = 0x200000
INFO:    SPSR = 0x3c9


U-Boot 2018.07-rc1-armbian (Jun 19 2018 - 00:14:04 +0800)

Model: Firefly-RK3399 Board
DRAM:  3.9 GiB
MMC:   dwmmc@fe320000: 1, sdhci@fe330000: 0
Loading Environment from MMC... OK
In:    serial@ff1a0000
Out:   serial@ff1a0000
Err:   serial@ff1a0000
Model: Firefly-RK3399 Board
Net:   eth0: ethernet@fe300000
Hit any key to stop autoboot:  0
=> 
=> 
=> usb start
starting USB...
USB0:   USB EHCI 1.00
USB1:   USB EHCI 1.00
scanning bus 0 for devices... 1 USB Device(s) found
scanning bus 1 for devices... 3 USB Device(s) found
       scanning usb for storage devices... 1 Storage Device(s) found
=> usb info
1: Hub,  USB Revision 2.0
 - u-boot EHCI Host Controller 
 - Class: Hub
 - PacketSize: 64  Configurations: 1
 - Vendor: 0x0000  Product 0x0000 Version 1.0
   Configuration: 1
   - Interfaces: 1 Self Powered 0mA
     Interface: 0
     - Alternate Setting 0, Endpoints: 1
     - Class Hub
     - Endpoint 1 In Interrupt MaxPacket 8 Interval 255ms

1: Hub,  USB Revision 2.0
 - u-boot EHCI Host Controller 
 - Class: Hub
 - PacketSize: 64  Configurations: 1
 - Vendor: 0x0000  Product 0x0000 Version 1.0
   Configuration: 1
   - Interfaces: 1 Self Powered 0mA
     Interface: 0
     - Alternate Setting 0, Endpoints: 1
     - Class Hub
     - Endpoint 1 In Interrupt MaxPacket 8 Interval 255ms

2: Hub,  USB Revision 2.0
 -  USB 2.0 Hub [MTT] 
 - Class: Hub
 - PacketSize: 64  Configurations: 1
 - Vendor: 0x1a40  Product 0x0201 Version 1.0
   Configuration: 1
   - Interfaces: 1 Self Powered Remote Wakeup 100mA
     Interface: 0
     - Alternate Setting 0, Endpoints: 1
     - Class Hub
     - Endpoint 1 In Interrupt MaxPacket 1 Interval 12ms
     - Endpoint 1 In Interrupt MaxPacket 1 Interval 12ms

3: Mass Storage,  USB Revision 2.10
 - SanDisk Ultra USB 3.0 4C530001240422114145
 - Class: (from Interface) Mass Storage
 - PacketSize: 64  Configurations: 1
 - Vendor: 0x0781  Product 0x5595 Version 1.0
   Configuration: 1
   - Interfaces: 1 Bus Powered 224mA
     Interface: 0
     - Alternate Setting 0, Endpoints: 2
     - Class Mass Storage, Transp. SCSI, Bulk only
     - Endpoint 1 In Bulk MaxPacket 512
     - Endpoint 2 Out Bulk MaxPacket 512
=> dhcp
ethernet@fe300000 Waiting for PHY auto negotiation to complete........ done
Speed: 1000, full duplex
BOOTP broadcast 1
BOOTP broadcast 2
BOOTP broadcast 3
BOOTP broadcast 4
BOOTP broadcast 5

Abort
=> 

 

Posted

Already available: the official NAS enclosure, which opens up plenty of opportunities for specific server and NAS use cases:

 

[Image: NAS enclosure]

 

Please check the product page for details (e.g. what's included and what you need to purchase separately).

 

My understanding is that the drive cage can accommodate 2 x 3.5" disks and 2 x 2.5" disks, though you can only use 2 of them at the same time with Pine's own SATA card (relying on an ASM1061 -- check performance numbers here; the tests were made on an ODROID-N1 which uses the same SoC and SATA chip). The ASM1061 only provides 2 SATA ports (so eSATA and SATA can't be used at the same time) and is attached with just a single PCIe lane, which is fine when connecting spinning rust but will already bottleneck two SSDs, since a single Gen2 lane (5 GT/s with 8b/10b encoding, i.e. roughly 500 MB/s) will limit overall bandwidth then.

 

Performance fanatics might skip Pine's cheap ASM1061 card and search for a PCIe card based on either the ASM1062 or Marvell's 88SE9230 (both using 2 PCIe 2.x lanes, the latter providing 4 SATA ports). If it's just about attaching 4 SATA devices, cards based on Marvell's 88SE9215 will also do (for performance numbers see here).

 

External powering with one single 12V PSU (the common 5.5/2.1mm barrel plug, centre positive) is sufficient. With 2 x 3.5" disks you should choose a 5A one.

 

If you want to add Wi-Fi to this setup you most probably need to drill 2 holes to attach good external antennas (recommended, since Pine's Wi-Fi module is a pretty decent one: the AP6359SA is dual-band, dual-antenna and even RSDB-capable, which allows using both bands at the same time, providing more bandwidth and especially lower latency if the AP supports it too).

Posted

Hmm, this thingie will allow a bunch of fancy SSD/HDD mixed setups (and from what I understand, which isn't much, ZFS seems to support such configurations relatively well). From what I see, they even took care of proper mounting for PCIe cards. I'm curious how well the cooling with one big fan works if you've got two 2.5" and two 3.5" disks inside. From what I know, the RockPro64 has two 5V rails and one 12V rail for powering disks; has anyone ever checked how much current you can get out of them (e.g. when you plan to run a 4-disk setup, may voltage drop be an issue even with a sufficient PSU, i.e. is there enough copper on the PCB)? At least Pine seems to have good ideas about how you could use their SBCs in a useful manner. :)

Posted

My July batch board arrived a couple of days ago; what process are people following to build a bootable image? I'd like to run some tests with an Intel i350-T4 and a Mellanox 10Gb card.

Posted
11 hours ago, Arglebargle said:

what process are people following to build a bootable image?

 

For testing purposes I always just grab a Bionic or Stretch minimal image from ayufan, then look for the latest mainline kernel available and install it:

apt-cache search kernel | grep -- '-4\.1'
apt install linux-image-4.18.0-rc5-1048-ayufan-g69e417fe38cf
reboot

A mainline kernel was a requirement to use PCIe in the past, but according to the release notes that might have changed recently (by disabling SDIO). So you might want to test with both 4.4 and mainline...

 

Edit: Don't forget to always check /proc/interrupts for the PCIe-related IRQs and assign them to one of the big cores.
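
A sketch of that IRQ pinning (the actual IRQ number is board/kernel specific, hence the <N> placeholder; the mask 10 is hexadecimal, i.e. 0x10 = cpu4, the first A72 core):

grep -i pcie /proc/interrupts                  # find the IRQ number <N>
echo 10 | sudo tee /proc/irq/<N>/smp_affinity  # pin it to cpu4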

Posted
10 hours ago, tkaiser said:

 

For testing purposes I always just grab a Bionic or Stretch minimal image from ayufan, then look for the latest mainline kernel available and install it:


apt-cache search kernel | grep -- '-4\.1'
apt install linux-image-4.18.0-rc5-1048-ayufan-g69e417fe38cf
reboot

A mainline kernel was a requirement to use PCIe in the past, but according to the release notes that might have changed recently (by disabling SDIO). So you might want to test with both 4.4 and mainline...

 

Thanks! PCIe seems to be working fine on the most recent 4.4 kernel; the Intel NIC I tested linked at 5GT/s x4 without issue.

 

root@rockpro64:~# uname -a
Linux rockpro64 4.4.132-1075-rockchip-ayufan-ga83beded8524 #1 SMP Thu Jul 26 08:22:22 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

 

LnkSta: Speed 5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

 
