Everything posted by TRS-80
-
My previous estimate I totally pulled from my rear end, because I have no idea. But it sounds like maybe a little more often than I was thinking. Whatever the system default is doing is probably fine, unless you really know what you are doing and/or have some reason for changing it.
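On distros that use systemd (Armbian included), periodic TRIM is normally just a timer unit, so you can check for yourself what the default schedule actually is. A minimal sketch, assuming the stock util-linux unit names:

```
# see whether periodic TRIM is enabled, and on what schedule
systemctl status fstrim.timer
systemctl cat fstrim.timer        # shows OnCalendar=, typically weekly

# enable it if it is not already running
sudo systemctl enable --now fstrim.timer
```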
-
Have you tried replacing the power supply? I can recommend Mean Well; they manufacture quality supplies at very reasonable prices. I use them for all my SBCs (and most other electronics, too). Also, when you check any power supply, you need to do so under load, like during startup, as that is when you will see possible voltage dips. Of course, be careful while doing so!
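On the software side, you can often catch a marginal supply by watching the kernel log while the board is under load. A rough sketch; using stress-ng to generate load is my assumption here, any heavy workload will do:

```
# follow the kernel log in one terminal; look for resets, I/O errors,
# or (on some boards) undervoltage warnings while the load runs
sudo dmesg -wH

# meanwhile, in another terminal, generate some load
stress-ng --cpu 4 --timeout 60s
```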
-
It seems to me like outreach efforts over the years (by people like @NicoD, etc.) have been putting us on the map. I also tried to help get the word out on the last release announcement, which ended up being picked up / re-posted in more places than ever before (including LWN, Tom's Hardware, r/linux, etc.), which I think probably helped, too.
-
Sounds way too frequent to me, unless you are maybe constantly writing to it or something, in which case you should use something more robust, like the sort of (industrial, expensive) hardware they recommend for ZFS cache. In normal usage I would think twice a year (or quarterly, monthly?) maybe, I dunno. Why do you think you need to do it so often? Also, a lot of the info out there on the Internet about trimming SSDs is out of date; in most cases all of that is done automatically now. As you point out though, I am not sure how this works on flash devices.
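If you ever do want to run a TRIM by hand rather than waiting on the timer, it is a one-liner. A minimal sketch; the mount point is whichever filesystem you want trimmed:

```
# trim all mounted filesystems that support it, verbosely
sudo fstrim -av

# or just one mount point
sudo fstrim -v /
```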
-
Almost certainly not. For some more context, see the 'BSP, blobs, ‘vendor’ kernels and bootloaders' section of the recent release announcement. There is also some info in the Armbian documentation. This is essentially/roughly the raison d'être for Armbian in the first place. Welcome aboard!
-
Anyone heard of this? I had not, until it was brought to my attention in a comment on the release announcement I sent to LWN. Some starting points:
https://validation.linaro.org/
https://www.lavasoftware.org/
https://wiki-archive.linaro.org/LAVA
https://www.linaro.org/blog/lava-fundamentals/
https://github.com/Linaro/lava
Anyway, there is plenty of info on the Internet about it. I had a couple of questions / thoughts: Have any of you heard about / are you familiar with this? Is this suitable for our use-case?
-
@NicoD was looking for a video thumbnail just the other day, in fact. Maybe you guys can eventually link up and do some collaboration? Besides the other things mentioned. Thanks for reporting back. Don't worry, we all come and go; I disappear for weeks and months sometimes. We all do what we can. Cheers!
-
Eventually you figure this out and get it working; then you get curious about other devices, some time after that you become a contributor, women swoon, and small children look up to you with admiration. Until then, good luck, you are going to need it! Actually, jokes aside, it sounds not so insurmountable based on what you describe. Sorry I can't help with any more than cheerleading.
-
This is not so much a 'Tutorial' as it is an overview (and/or a starting point for further research) of some things to consider when setting up a file server: some in general, and some more particular to SBCs. In the simplest case, you could attach almost any drive to almost any board, in any number of ways, and you have yourself 'a file server.' However, with even a small amount of effort, there are so many better ways to do things nowadays. We have a lot more interesting hardware choices, advanced filesystems, etc., and those are mostly what I want to talk about. But first, some context.

Dedicated distros vs simple directory sharing

Personally I am not a fan of dedicated distros like FreeNAS, OMV, etc., as I consider them too 'heavyweight' and single-solution specific. I mean, if you need some or all of those features, or you like them or whatever, then great, use them. However, a 'file server' is one of the simplest use cases of a 'server,' and I am sure one which almost everyone buying an SBC will set up at some point. IMO, all one needs to do is share some folder(s) via either NFS (if your network is mostly GNU/Linux machines) or Samba/SMB (if your network is mostly Windows machines). And that's it, you're done! Simple, lightweight, and using standard protocols (well, a little less 'standard' in the case of Windows, but I digress). See the sketch just below for how little is involved. You could also add any of the software included in the 'all in one' solutions from the first paragraph on a piece-by-piece basis, if and when you need it. Personally I am more a fan of starting small (a vanilla Armbian base) and then adding the things you need one by one. But you are of course free to do whatever you want. Setting any of that up, or debating about it, is considered off-topic for this thread, as there are tons of resources already around the Internet about those things. I want to focus on other, (IMO) more interesting, aspects. You are welcome to make your own thread with a different focus, of course.
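Since I said "that's it, you're done" above, here is a minimal sketch of what I mean, just to show the scale of the job. The share path /srv/share, the 192.168.1.0/24 subnet, and the export options are assumptions you would adapt to your own network:

```
# NFS: export a directory (assumes nfs-kernel-server is installed)
echo '/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# Samba: add a share definition, then restart the daemon
sudo tee -a /etc/samba/smb.conf <<'EOF'
[share]
   path = /srv/share
   read only = no
EOF
sudo systemctl restart smbd
```

That really is the whole thing in the simple case; everything else in this post is about doing better than the simple case.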
Important note about hardware RAID

RAID, in case you were unaware, means "Redundant Array of Independent Disks." For a looooooong time, I (and I think many others with a SOHO NAS perspective) thought that 'hardware RAID was the only proper way to do it,' but in the course of researching ZFS I eventually became convinced otherwise. In fact, historically the 'big-iron' filesystems (Solaris, others) were apparently always run with software RAID in the enterprise, as there are a lot of advantages to that. Just think about it: if your hardware RAID card dies, you need to replace it with exactly the same model before you can even begin to recover your data. With software RAID, you just re-connect the drives and you are back in business (with ZFS, you can even connect them in any random order, amazing!). In particular, if you are using ZFS, hardware RAID controllers are not recommended. I am not sure how much this applies to other, similar software RAID systems, but it would stand to reason that it does (IMO). Hardware controllers are also typically very expensive and proprietary, whereas software RAID uses readily available and inexpensive commodity hardware (while actually increasing reliability). All of which is why I have become such a big fan of it, whether we are talking about ZFS (which I like) or some other software RAID solution you may prefer instead. The sketch below gives a taste of what that looks like in practice. Now, with that out of the way, onward!
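To make the 'any random order' point concrete, here is a rough ZFS sketch. The pool name tank and the device names are placeholders (in real life you would use stable /dev/disk/by-id/ paths), and it assumes ZFS is already installed:

```
# create a mirrored pool from two disks
sudo zpool create tank mirror /dev/sda /dev/sdb

# later, after moving the disks to a new machine or different ports:
# ZFS finds its member disks by their on-disk labels, not by cabling
# (you may need -f if the pool was not cleanly exported first)
sudo zpool import tank
sudo zpool status tank
```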
Filesystem

Assuming you came to similar conclusions as I did in the previous section, this is where our decision-making process really begins in earnest, IMO. The way I see it, choice of filesystem will dictate topology, which will in turn dictate choice of hardware (which SBC to buy). Of course, if you already bought some SBC, you are in a different boat (and working backwards, from my point of view), and you will have to adapt accordingly (and/or buy different hardware). If that is where you are, go to the 'Historical Overview' section; maybe you can find a solution there which works with the hardware you already have. Now, back on track: we are really lucky nowadays, thanks to F/LOSS, to have some industrial-grade, software-based filesystems available to us mere mortals! I like ZFS, but certainly opinions will differ. Some people like btrfs; in fact I got into a lengthy (but IMO informative) debate with @tkaiser about that. Then there are distributed things like Ceph (and others), which I will admit to knowing less about; if you do, kindly add your thoughts to the thread when you have a moment. Point being, the hardware setup for someone like me (using ZFS), or even someone using btrfs, will perhaps be quite different from that of someone using something distributed like Ceph. Just keep that in mind.

Historical Overview (SBC Hardware)

I want to cover some of the things which came before, as I think they are relevant to understanding where we are today. In the beginning, there was darkness. Wait, let me fast-forward a bit. I will start with the creation of Armbian. Which happened, in fact, because Igor was trying to set up a file server on a cubietruck. True story (and like I said, you are not alone; this is a common use-case)! Anyway, many of us bought the (A20-based) cubietruck back then, because they sold a small power adapter board which made this easy, and they also had a SATA connector right on the board. I think they were one of the first to do this, in fact (I could be wrong). After that, a lot of us used the (more powerful, Samsung Exynos5-based, octa-core) ODROID-XU4, along with a good-quality USB-to-SATA adapter with UASP. That one in particular was recommended by tkaiser a long time ago; however, as of the time of writing this post, PINE64 still sell it in their store(!). And it may be an 'okay' solution if you have some (especially USB3-capable) board laying around that you just want to use. Some people are still using this today. But we have a lot better options now. Then RK3399 came on the scene. Which was very exciting, because these support not only PCIe, but are also 64-bit (which is required for ZFS). Unfortunately, it took literally years for these to become really stable. Lucky for you, this is all in the past by now. And so, currently, many people will probably recommend RK3399 as a file (or other) server. There are a lot of other boards out there which I do not have as much experience with. But I also try to note only historically significant developments here (at least as I see them). If you have any to add, please comment below or contact me somehow (PM or IRC) and I will edit this post and add them.

Usage (other than file server)

As I already stated, almost any cheap board can be a (minimal) 'file server.' From that point of view, more powerful boards (like RK3399, or others) are overkill. However, maybe you want better throughput. Or more disks. Or, like me, you want to run an advanced filesystem like ZFS (also with more disks). Another thing which happens is that, once you handle the simple case of serving files, before you know it you will probably want to start doing more than that. You may also already have some more demanding requirements in mind beforehand. For all these reasons, a slightly more powerful (and more expensive) board may be a better choice, as you will not be limited in the future. But if you are very sure about your needs and wants, and perhaps have other boards on your network to handle processing-intensive tasks, this may not apply to you. Just something to consider.
Hardware Recommendations (RK3399 based)

For the reasons mentioned at the end of the 'Historical Overview' section, many of us are fans of RK3399-based devices nowadays. Within this subset, you will get a lot of opinions. I know, because I spent literally years asking people (including developers) while I was waiting for RK3399 to mature. Some developers think certain vendors make crappy hardware (and maybe they are right). Or they prefer the boards they know and develop on (naturally, IMO). Some vendors support the project a lot more than others (PINE64, notably, do not support Armbian at all, to my knowledge). So you will have to do some amount of your own research and form your own opinion here. I don't want to discount any of the above criteria too much. However, in the end, at least for me, one criterion bubbled to the top. And that was direct PCIe (or SATA) support. Since we are talking about RK3399 here, this means PCIe support. Which of course is in the SoC, but that is not what I am talking about; I am talking about putting a standard PCIe physical interface directly on the board. This way, you can buy any standard, cheap/dumb PCIe-to-SATA adapter card. You can keep spares. You will be able to obtain them for the foreseeable future. By contrast, you can certainly rig up your own adapters, cables, etc. from GPIO. But personally, I find that fiddly, and it doesn't strike me as the most reliable. I know some people are doing this just fine, though. Another way is that many vendors sell some sort of HAT which provides SATA connectivity. However, I find this short-sighted and -- quite frankly -- self-serving. What if they are out of stock, or stop making them? You are out of luck. Or you would need to revert to the above option (rigging up cables and adapters). It's good for them, selling add-on hardware, I guess -- hence my 'self-serving' comment -- but maybe not the best for you and me in terms of keeping our important data available and up and running on a long-term basis. Therefore I say, skip all of that and go with a board that has a standard physical PCIe slot right on the board.

ROCKPro64

And AFAIK, there is only one (standalone SBC) RK3399 board that meets that description, which is the ROCKPro64. Of course, I am not the first to come to the same conclusion; in fact there is a whole thread where myself and some other like-minded folks discuss using the ROCKPro64 as a NAS. I say 'standalone SBC' because I almost forgot...

Helios 64

These of course are no longer made, and Kobol, our one-time partners, have unfortunately closed up shop. However, they still come up for sale on the forums from time to time (usually in the Kobol Club, but that subforum is no more, so it could be anywhere). Here you would be getting a more integrated solution, often including a nice case, for perhaps a bit more money. Some people had problems with stability, though; as of mid-2024, it seems the community have finally got this device stable, which I personally find pretty remarkable, given the circumstances. Having said that, I am sure these devices are only getting harder and harder to come by, especially now. Not only that, but keeping it that way will always require ongoing maintenance, as the Linux kernel (and other parts of the OS) never stop moving forward.

Hardware Recommendations (not RK3399 based)

ODROID HC4

The ODROID HC4 was also brought up today, when this subject came up in IRC, by @rpardini, as that is what he uses. This is an interesting alternative to the above, except it is Amlogic (S905X3) based (like a lot of TV boxes). Which to me means two things: 1. It almost certainly requires blobs. 2. It's probably cheaper than RK3399. Well, in fairness, the ROCKPro64 probably requires some blobs, too. But somehow I get the impression that Amlogic are really obnoxious with their blobs, and I am not a fan of them as a company because of this. Surely a developer will need to correct me here. Anyway, I am not sure how it compares in processing power; I suspect less than RK3399(?). However, for media transcoding it may actually be better (perhaps hardware decoders(?); speculation on my part). Also, for the simple 'file server' use-case, in particular with Ceph or something like that, it becomes more interesting, IMO. Again, quoting rpardini from IRC today: "scale out, not up."

Future

There are newer SoCs, like RK3566 and others. At this time, they are a lot like RK3399 was early on: very exciting, but not able to realize their full potential due to lack of (software/Linux) development. The consensus among developers seems to be that RK3566 development should go faster/easier than RK3399 development did, because of all the groundwork that was laid in the course of the latter. Maybe that's true, but we are still talking about some amount of time. Maybe not years, but still significant time. If/when this changes, I will try to remember to update the post. Feel free to ping me by posting; I should get notified.

Others?

Possibly anything in the NAS category. Which I mostly covered already (above, in more detail), but that can always change in the future. Certainly I forgot (or don't know about) some others. I have shared what I learned from my research over the last several years while looking for a solution for myself. However, I do want to make this a more general resource that we can all point to, so please discuss, and I will be happy to edit this post later on and add relevant info. To clarify, I am not talking about 'any SBC which can be made into a file server,' which is almost any of them, but rather particular hardware which is interesting for some specific reason(s), especially currently (or perhaps historically, which I would add to that section). Cheers!
-
If I am interpreting the above correctly, it sounds like there have been some hardware changes which will need to be implemented. So until that gets done, some things might not work.
-
I came across these recently and found several interesting things within, so I thought I would share them. I debated posting this in Development, but wanted anyone to be able to see and reply to the thread. Anyway, some parts I felt were important enough that I actually took down some notes and quotations, which I will include.

Open drivers for Arm GPUs

This looks like it was presentation LVC21-318 at Linaro Connect '21 on 2021-03-25. Video is available at the above page, and also on ThemTube here. I found this a fascinating presentation. Alyssa contrasts the (proprietary) 'before times' with the current situation, which is apparently quite different. In fact, I was quite surprised to learn that: She then goes on to give 3 case studies, one of which was VideoCore, where I was more than a little surprised to learn that Broadcom at some point actually hired someone to work on their F/LOSS driver. Now, this was only after that guy was well into his own (night-and-weekend) reverse engineering effort, followed by a lot of public pressure, but still. The end result, apparently, is that this driver is now shipping in the production RPi 4. The next case study was about Freedreno. I will quote my favorite part from here: Of course, being firmly in the camp with the 'looney Free Software types' myself, this comment warmed my heart quite a bit. Finally she gives the Panfrost case study, where she points out that things are apparently moving away from the 'reverse-engineering underdog' model, and: And so reverse engineering is apparently no longer needed. Apparently this is the driver that ships in the PineBook Pro, for example. Of course, I must admit to being more than a little bit (pleasantly!) surprised at all of this. And also starting to soften my stance towards Broadcom (just a little though, lol). If you are the least bit interested in any of this, I can highly recommend watching the full presentation. It's only 19 minutes long anyway, but I personally was riveted the entire time.

The Open Graphics Stack

I think this presentation was given at Embedded Linux Conference '21 on 2021-09-29. So, about 6 months after the above. But honestly, I came across it on ThemTube here. In the beginning, she makes a good point about the tension between embedded devices with perhaps 20-year life cycles and devices with proprietary drivers which may be EoL in 5 years, and how this tension can be alleviated by using F/LOSS drivers. But what got my attention the most was: She then goes on about the details of particular platforms (including x86, ARM, etc.) and then finally: Which is a quite bold (and again, pleasantly surprising) statement, IMO, but then again I do consider her an authority on the subject. After that she goes into quite some detail about the nuts and bolts of all the various parts of the stack, which you may or may not be interested in watching as much of.
-
OK, I was thinking more about maintaining the info there, like adding their own name as Maintainer, etc., rather than about generating images. So I will remind people about this as part of onboarding (and encourage others to do the same if I am not around).
-
The discussion here got me thinking: I thought at some point one of the responsibilities of a Board Maintainer was to keep the download page for their board updated? So I double-checked the job description, and that's not on there now. Was that a simple oversight? Or maybe we changed policy about handing out WordPress access to all these new people to the project?
-
Moved to P2P help. dd does not verify the written image, which is why it is not the recommended way. Please read the Getting Started docs and try again.
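If you do use dd anyway, you can at least verify the write yourself afterwards. A minimal sketch; Armbian.img and /dev/sdX are placeholders for your actual image file and target device:

```
sudo sync
echo 3 | sudo tee /proc/sys/vm/drop_caches   # read from the media, not the page cache

# read back exactly as many bytes as the image contains and compare checksums
sudo head -c "$(stat -c%s Armbian.img)" /dev/sdX | sha256sum
sha256sum Armbian.img
# the two hashes must match; if not, the write (or the media) is bad
```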
-
https://www.armbian.com/newsflash/armbian-22-02-pig-release-announcement/ Please promote, share, etc. on social media!
-
rockpro64 eMMC armbian 21.08 Kernel 5.10.60 (buster)
TRS-80 replied to PointPubMedia's topic in Beginners
First and foremost, check power. Maybe you did not read all the way to the end of my previous post. Also, please answer the questions I asked.
-
rockpro64 eMMC armbian 21.08 Kernel 5.10.60 (buster)
TRS-80 replied to PointPubMedia's topic in Beginners
Well, I only have (1) ROCKPro64, but I never had any such issues. Some other people have them as well, and theirs are running reliably. I am starting to wonder if you got some bad eMMC. I would carefully test them all, one by one, using f3 as described here. Take your time and make notes. And do report back what you find. When you say 'testing,' what do you mean? Moving them around? Or did you already test the flash with something like f3? Or ...? You probably don't want to be on the old BSP kernel (and probably don't need to be); I think that is the wrong path / a misdiagnosis. Actually, what are you using to power them? This could be a lack of good power, too. Check those first, as they are the most common causes (power and media). In fact, make sure your power supply is good before testing the eMMC.
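For reference, testing a card or eMMC module with f3 looks roughly like this. A sketch assuming the module is mounted at /mnt/emmc; f3write fills the free space with test files and f3read verifies them back:

```
# install and run f3 against the mounted filesystem
sudo apt install f3
f3write /mnt/emmc     # writes .h2w test files until the disk is full
f3read /mnt/emmc      # reads them back and reports corrupted/overwritten sectors
```

If f3read reports anything other than zero corrupted sectors, the module is suspect.
-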
Armbian 22.02 (Pig) Release Thread
TRS-80 replied to Heisath's topic in Armbian Project Administration
Good morning, guise! I made some major revisions all day yesterday after getting some helpful feedback from @rpardini, which made it a little long again. With a good night's sleep and a fresh cup of coffee, I will keep polishing it up / paring it back down right up until you forcibly yank it from my clutches, but it is mostly ready to go, I think.
-
I had started to remove the automatic generation of a PDF from the documentation, but now I am starting to wonder if that was actually a good idea. In case you did not know, in addition to https://docs.armbian.com/ you can also download a PDF copy of the same from here. Maybe this is helpful for people who are offline while working on their SBC? Anyway, please speak up if you are using this! Related Issue: 96 and PRs: 176, 177.