Bino Oetomo Posted October 31, 2015

Dear All,

My board is a Cubieboard2. Let's say I want to move the LED pins somewhere other than PH20/PH21. With a 3.x (legacy) kernel, all we need to do is edit the fex file and compile it to script.bin. The edited parts (moving the LEDs from PH20/PH21 to PE10/PE11) change from:

[gpio_para]
gpio_used = 1
gpio_num = 8
gpio_pin_1 = port:PE04<3><default><default><1>
gpio_pin_2 = port:PE05<3><default><default><1>
gpio_pin_3 = port:PE06<3><default><default><1>
gpio_pin_4 = port:PE07<3><default><default><1>
gpio_pin_5 = port:PE08<3><default><default><1>
gpio_pin_6 = port:PE09<3><default><default><1>
gpio_pin_7 = port:PE10<3><default><default><1>
gpio_pin_8 = port:PE11<3><default><default><1>

[leds_para]
leds_used = 1
leds_num = 2
leds_pin_1 = port:PH20<1><default><default><0>
leds_name_1 = "green:ph20:led1"
leds_default_1 = 1
leds_pin_2 = port:PH21<1><default><default><0>
leds_name_2 = "blue:ph21:led2"
leds_default_2 = 0
leds_trigger_2 = "heartbeat"

to:

[gpio_para]
gpio_used = 1
gpio_num = 6
gpio_pin_1 = port:PE04<3><default><default><1>
gpio_pin_2 = port:PE05<3><default><default><1>
gpio_pin_3 = port:PE06<3><default><default><1>
gpio_pin_4 = port:PE07<3><default><default><1>
gpio_pin_5 = port:PE08<3><default><default><1>
gpio_pin_6 = port:PE09<3><default><default><1>

[leds_para]
leds_used = 1
leds_num = 2
leds_pin_1 = port:PE10<1><default><default><0>
leds_name_1 = "green:PE10:led1"
leds_default_1 = 1
leds_pin_2 = port:PE11<1><default><default><0>
leds_name_2 = "blue:PE11:led2"
leds_default_2 = 0
leds_trigger_2 = "heartbeat"

My question is: how do I do the same with vanilla (4.x) kernels?

Sincerely,
-bino-
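P.S. For reference, on 3.x I do the fex round trip with the bin2fex/fex2bin tools from sunxi-tools, roughly like this (the script.bin path varies by distribution, so treat it as illustrative):

    bin2fex /boot/script.bin script.fex    # decompile script.bin to editable fex
    # ... apply the [gpio_para] / [leds_para] edits shown above ...
    fex2bin script.fex /boot/script.bin    # compile the edited fex back to binary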
zador.blood.stained Posted October 31, 2015

I think that in sun7i-a20-cubieboard2.dts you need to change

leds {
    compatible = "gpio-leds";
    pinctrl-names = "default";
    pinctrl-0 = <&led_pins_cubieboard2>;

    blue {
        label = "cubieboard2:blue:usr";
        gpios = <&pio 7 21 GPIO_ACTIVE_HIGH>;
    };

    green {
        label = "cubieboard2:green:usr";
        gpios = <&pio 7 20 GPIO_ACTIVE_HIGH>;
    };
};

to

leds {
    compatible = "gpio-leds";
    pinctrl-names = "default";
    pinctrl-0 = <&led_pins_cubieboard2>;

    blue {
        label = "cubieboard2:blue:usr";
        gpios = <&pio 4 11 GPIO_ACTIVE_HIGH>;
    };

    green {
        label = "cubieboard2:green:usr";
        gpios = <&pio 4 10 GPIO_ACTIVE_HIGH>;
    };
};

and

led_pins_cubieboard2: led_pins@0 {
    allwinner,pins = "PH20", "PH21";
    allwinner,function = "gpio_out";
    allwinner,drive = <SUN4I_PINCTRL_10_MA>;
    allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
};

to

led_pins_cubieboard2: led_pins@0 {
    allwinner,pins = "PE10", "PE11";
    allwinner,function = "gpio_out";
    allwinner,drive = <SUN4I_PINCTRL_10_MA>;
    allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
};

You'll need to recompile the kernel (or at least the device tree part) with the build script to get a new sun7i-a20-cubieboard2.dtb file. Alternatively, you can try to decompile, change, and recompile the dtb file with dtc from the device-tree-compiler package, but I'm not sure whether that will work.
Bino Oetomo Posted November 1, 2015 (Author)

Dear Sir,

> I think that in sun7i-a20-cubieboard2.dts you need to change ... You'll need to recompile the kernel (or at least the device tree part) with the build script to get a new sun7i-a20-cubieboard2.dtb file. Alternatively, you can try to decompile, change, and recompile the dtb file with dtc from the device-tree-compiler package, but I'm not sure whether that will work.

I really appreciate your help. But what about the GPIO part? I still use those pins (1-6) for other purposes. I also took a look at the dtsi and found there is no "normal" GPIO definition in either the dts or the dtsi.

Sincerely,
-bino-
zador.blood.stained Posted November 1, 2015

The mainline kernel comes with a new GPIO driver with the standard Linux GPIO interface; it doesn't need any DT configuration. You need to "export" a pin number for it to appear in the sysfs tree (I'm assuming you want sysfs access).

Read the section "Accessing the GPIO pins through sysfs with mainline kernel" here: https://linux-sunxi.org/GPIO
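For example, a minimal sysfs session for PE10, following the formula from that wiki page ((position of the port letter in the alphabet - 1) * 32 + pin number, so PE10 = (5 - 1) * 32 + 10 = 138):

    echo 138 > /sys/class/gpio/export            # make PE10 visible as gpio138
    echo out > /sys/class/gpio/gpio138/direction # configure it as an output
    echo 1 > /sys/class/gpio/gpio138/value       # drive the pin high
    echo 138 > /sys/class/gpio/unexport          # release the pin when done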
Bino Oetomo Posted November 2, 2015 (Author)

Dear Sir,

I really appreciate your enlightenment... I got the picture.

Sincerely,
-bino-
zador.blood.stained Posted November 2, 2015

FYI, I checked: using dtc from device-tree-compiler on the device itself is an easier way to make such small changes to the Device Tree config.

http://forum.armbian.com/index.php/topic/382-armbian-on-pcduino3/#entry2444

The decompiled dts looks a bit different from the original, but it's easy to find the LED-related sections and values.
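The round trip on the device looks roughly like this (a sketch; the dtb path below is illustrative and depends on where your distribution keeps it):

    apt-get install device-tree-compiler
    dtc -I dtb -O dts -o cb2.dts /boot/dtb/sun7i-a20-cubieboard2.dtb   # decompile
    # ... edit the leds and led_pins@0 sections in cb2.dts ...
    dtc -I dts -O dtb -o /boot/dtb/sun7i-a20-cubieboard2.dtb cb2.dts   # recompile

The decompiled source looks different mainly because symbolic names are gone: GPIO_ACTIVE_HIGH becomes 0, the &pio reference becomes a numeric phandle, and so on.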
valant Posted January 9, 2018

"Learning device trees"! Kewl. I have a question. Suppose I have a device tree where all binary data are stored in little-endian order. Do you know whether Linux would be able to consume this without choking? I mean, does it unconditionally switch endianness when processing a DT on ARM (which is mostly LE), or does it first check whether switching is needed (for example, by inspecting the d00dfeed magic in LE)? I have no idea where to check this.
zador.blood.stained Posted January 9, 2018

45 minutes ago, valant said:
> Suppose I have a device tree where all binary data are stored in little endian order.

AFAIK the DT has a fixed internal endianness (BE) regardless of the platform it is used on, so if you somehow ended up compiling an LE DT, you should probably throw away your libfdt implementation.

However, if you are talking about device register endianness, then (according to a quick Google search) there are some properties that control data conversion, but those properties are most likely device-driver specific:
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/tree/Documentation/devicetree/bindings/common-properties.txt

In any case, DT data operations (especially on "cells") are wrapped with fdtXX_to_cpu and cpu_to_fdtXX, and these functions unconditionally call BE conversion functions.
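To illustrate, this is roughly what such a wrapped load boils down to (a sketch modeled on libfdt's fdt32_to_cpu; the helper name here is mine). Because the cell is extracted byte by byte from its big-endian layout, the same code is correct on both LE and BE hosts:

    #include <stdint.h>

    /* Load one 32-bit DT cell (stored big-endian per the spec)
     * into host byte order, independent of host endianness. */
    static inline uint32_t dt_cell_to_cpu(const void *p)
    {
        const uint8_t *b = (const uint8_t *)p;
        return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
               ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
    }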
valant Posted January 9, 2018

39 minutes ago, zador.blood.stained said:
> AFAIK DT has fixed internal endianness (BE) regardless of platform it is used on, so if you somehow ended with compiling a LE DT you should probably throw away your libfdt implementation.

That was designed for BE PPC and SPARC and makes no sense for LE processors like ARM, x86, etc. If one looks at the DT spec (OF, ePAPR), there is nothing fundamental preventing an LE FDT. It's obvious that the BE-only requirement is synthetic (the same as UEFI's requirement of being LE-only). It would be nice if LE machines worked with LE FDTs. I imagined it this way: to achieve it, there would be no need to even change the magic value, just one easy check on that magic at the beginning, and that's all; no byte swapping. At least ARM has the setend instruction; I'm not sure MIPS has one like it. I was counting on the platform software's sophistication...

39 minutes ago, zador.blood.stained said:
> In any case DT data (especially "cells") operations are wrapped with fdtxx_to_cpu and cpu_to_fdtxx and these functions are unconditionally calling BE conversion functions.

Which, as I suspected, is not the case. Thanks.

Why do I need this? As I've said, MIPS is probably not as comfortable with endianness switching, and since there is no ACPI there, DTs are the option. I want to, and will, use LE DTs inside the firmware. As I've said, it's easy to translate them once at compile time instead of doing this stupid work on every boot. But if Linux is not aware of such things, the firmware will have to supply a BE DT for it as well, at the handoff stage.
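To make the idea concrete, the check I have in mind would look something like this (a purely hypothetical sketch: no spec or parser does this today, and the names are mine). One native read of the first word tells you whether the blob matches the host:

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>

    #define FDT_MAGIC         0xd00dfeedu  /* native read when blob and host endianness match */
    #define FDT_MAGIC_SWAPPED 0xedfe0dd0u  /* native read when they differ */

    /* Hypothetical probe: one read, one comparison, done once per boot.
     * Returns false if the blob is not an FDT at all. */
    static bool fdt_probe_endianness(const void *blob, bool *needs_swap)
    {
        uint32_t magic;
        memcpy(&magic, blob, sizeof(magic)); /* avoid unaligned access */
        if (magic == FDT_MAGIC)         { *needs_swap = false; return true; }
        if (magic == FDT_MAGIC_SWAPPED) { *needs_swap = true;  return true; }
        return false;
    }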
zador.blood.stained Posted January 9, 2018

4 minutes ago, valant said:
> If one looks at the DT spec (OF, ePAPR) there is nothing fundamental preventing making FDT LE.

If you look at the DT specification, it explicitly states that u32 and u64 values, the DT header, and properties are big-endian (section 2.2.4, table 2.3; section 5.2; section 5.3.2). While nothing fundamental prevents you from making an LE DT-like data structure for your own needs, it wouldn't be possible to call it the Device Tree, because it would break the specification.
valant Posted January 9, 2018

Yeah, explicitly and needlessly. Every time an LE processor needs to read such a value, instead of just reading it, it has to switch endianness, which means calling a (bunch of) functions, or swapping words, dwords, or, even worse, 64-bit qwords. But if the DT is supplied in the processor's matching endianness, everything becomes a comfortable process. That specification was withdrawn a long time ago anyway; "industry" uses ACPI, which is LE, btw. This modification is so slight, and it gives enough benefit, that it would be worth adopting. But I just needed to know. Thanks for the answer.
chrisf Posted January 10, 2018

11 hours ago, valant said:
> This modification is so slight, and it gives benefits that it's worth to adapt it.

Yeah, you'd be able to shave microseconds off the boot time... Meanwhile, dtc gets more complex, since you'd need to know the endianness of the target you're compiling for and the kernel version that's going to read it (old kernels aren't going to change). The FDT format changes too. All that to save half a dozen instructions per 32-bit value on CPUs that don't support BE, once per boot.
valant Posted January 10, 2018

17 hours ago, chrisf said:
> Yeah, you'd be able to shave microseconds off the boot time... Meanwhile dtc gets more complex, since you need to know the endianness of the target you're compiling it for, and the kernel version that's going to read it. All that to save half a dozen instructions per 32bit value on CPU's that don't support BE, once per boot.

Come on, don't exaggerate. dtc doesn't need to change at all; in fact, since it runs on LE machines, it would get even less work, because swapping every word during compilation would no longer be needed and could be turned off. The FDT format doesn't change at all; even the d00dfeed signature doesn't. You just read it at the beginning: if it is d00dfeed, the endianness matches; otherwise it doesn't, and from that point you can either complain or do byte swapping. All ARM targets now are LE, so it's a no-brainer which endianness to pick when preparing an FDT for a board.

But I was asking about another thing: whether the Linux DT parser checks the endianness or not (and I got the answer). If it did, it wouldn't be necessary to swap bytes; that's all. I am not in a position to modify an outdated standard. I haven't counted how much overhead this irrational behavior incurs, but it is irrational, and that bugs me:

1. dtc runs on an LE machine and compiles the FDT -> the first unnecessary swapping.
2. On every boot, and on every DT walk (driver binding?), the kernel DT parser, running on an LE machine, does these swaps again, on every numeric entity!
3. Also, possibly, firmware making dynamic additions to the DT "on the fly" does the same as in point 2.

Not cool.
chrisf Posted January 10, 2018

1 hour ago, valant said:
> 1. dtc runs at LE machine and compiles FDT -> first unnecessary swapping
> 2. on every boot and on every DT walk (? driver binding) kernel DT parser, running on LE machine does these swaps again. on every numeric entity!

1. dtc doesn't necessarily run on the machine that loads the FDT.
2. I assumed that after the device tree is parsed, it's held in memory as native data types.

The Device Tree standard specifies big-endian 32- and 64-bit integers; to use anything different would not be the Device Tree standard. I don't blame them for using the same format as nearly every network protocol out there and half of the other file format specifications.

ARM and MIPS are apparently both bi-endian; it's really only x86 that is little-endian only.

You can use the standard hton* and ntoh* functions to convert between big-endian and native byte order without the code having to know what processor it's running on. The point here is that code parsing an FDT doesn't need to know the processor's endianness; you don't have to read the magic 0xd00dfeed sequence to figure out the byte order and convert based on that.

Converting between the two isn't really that expensive; computers do it all the time. TCP/IP headers are all big-endian.

My opinion on the matter is that shared things (files, network protocols, etc.) all need defined byte orders. I don't care (much*) what they are; they just need to be specified. Variable byte order in files is a pain; just look at Unicode. Besides, a magic of edfe0dd0 isn't as cool as d00dfeed.

* I prefer big-endian, because that's how I write numbers down on paper and how you enter numbers into a calculator. If you have to manually (as in, with your eyes and brain) parse a file, big-endian is easier to work with.
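For instance, reading the FDT header magic with the standard functions (a sketch; on a BE host ntohl compiles to a no-op, on an LE host to a byte swap, and the source never has to ask which it is):

    #include <arpa/inet.h>  /* ntohl */
    #include <stdint.h>
    #include <string.h>

    /* The header magic is stored big-endian per the spec;
     * ntohl yields 0xd00dfeed in host order on any machine. */
    uint32_t fdt_read_magic(const void *blob)
    {
        uint32_t be_magic;
        memcpy(&be_magic, blob, sizeof(be_magic)); /* avoid unaligned access */
        return ntohl(be_magic);
    }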
valant Posted January 10, 2018

44 minutes ago, chrisf said:
> 1. dtc doesn't necessarily run on the machine that loads the FDT

Yes, but in 99.99% of cases it runs on an LE machine: either the developer's LE x86 workstation or his/her LE target.

45 minutes ago, chrisf said:
> ARM and MIPS are apparently both bi-endian. It's really only x86 that is little-endian only.

That doesn't actually matter if you can do better, without needless repeated byte swapping, than staying with "the standard" you don't really comply with anyway, because this DT is not the whole Open Firmware standard in the first place.

50 minutes ago, chrisf said:
> The point here is code parsing an FDT doesn't need to know what endianness is being used by the processor. You don't have to read the magic 0xd00dfeed sequence to figure out the file format and convert based on that.

Yeah, it's so much "easier" to do multiple unconditional LE->BE->LE translations instead of checking the magic ONCE... heh. Again, you don't need to know your own endianness, if that's the hard part. All you need is to infer, from one read and one comparison of the DT magic, whether the two endiannesses, yours and the DT's, match or not.

> * I prefer big-endian, because that's how I write numbers down on paper. It's how you enter numbers in to a calculator. If you have to manually (as in with your eyes and brain) parse a file, big-endian is easier to work with.

And I hate BE, to be honest; it's an upside-down notation. We actually write numbers on paper in an undefined order, because before you can say whether it's BE or LE you have to specify where "the start" is: in memory every byte is numbered, which is not the case on paper. But if we count from left to right, taking the left end of a line as the start, then we write in BE. And that's because medieval Europeans, borrowing numerals from the Arabs, messed things up. Arabs write from right to left, and that's how they write numbers too: writing the number 1980, for example, they first write the 0, then the 8, and so on, from right to left. If you look at modern Arabic writing with numbers, you'll notice that although the letters run in the opposite direction, the numbers look the same as they appear on paper here. They have it right: the least significant digit comes first. That is the natural way; it's how we write words: the first letter, for the first sound, comes first, not the other way around. With BE you have it all upside down: unnatural, annoying, and inconvenient. Imagine if we wrote words that way; "I prefer BE", for example, would be "I referp EB". Funny how BE processors don't store strings in BE order, with the highest-indexed character of a string first. Because BE sucks!
chrisf Posted January 10, 2018

12 minutes ago, valant said:
> And I hate BE, to be honest; it's an upside-down notation. [...]

My preference comes from English being my first language, where I say "nineteen eighty" or "one thousand, nine hundred and eighty", and I write left to right.

Try not to confuse strings with numbers; they're completely different. You've mixed up BE and LE in your string analogy: with BE, the most significant part comes first (hence the "big"). The first (most significant) sound of a word is (usually) the first letter, the same order as everything else in the English language. The analogy completely falls apart when you reversed an acronym (an ordered representation of multiple words) without reversing the order of the words. Big-endian is the order of the English language on paper: most significant first, left to right. So encoding "I prefer BE" as a "BE string" would be "I prefer BE"; as an "LE string" it would be "EB referp I".

FYI, the DeviceTree standard is here: https://www.devicetree.org/

It's time for peace: https://www.ietf.org/rfc/ien/ien137.txt
valant Posted January 11, 2018

1 hour ago, chrisf said:
> The first (most significant) sound of word is (usually) the first letter. The same order as everything in the English language. The analogy completely falls apart when you reversed an acronym (an ordered representation of multiple words) without reversing the order of the words.

Sorry, this is unconvincing. "First (most significant)": in what way is the first sound the "most significant"? That isn't determined; in fact, it's the opposite: the first sound is the least significant. It has the smallest number associated with its "order" among the other sounds in the word (the same as with numbers, where the powers of the base are ordered and the power-0 term is "the least significant term"). Strings represent words as well, and they are arrays of bytes, each of which has its index inside the array. The index is a number; it represents order, and you can clearly assign the meanings "first, next, most significant, least significant" to it (both inside the array and in memory). And strings are stored so that the byte with the least significant index comes first, on both architecture variants. Why? This is LE order; why doesn't BE do it its own way? Because it can't: BE is unnatural and fails to represent variable-length entities in its awkward BE order.

> FYI: the DeviceTree standard is here: https://www.devicetree.org/

Wow, they managed to release two versions of the "specification" in just two days! Well, it's all funny, but I was referring to the real specification, which is Open Firmware (IEEE something :lol:).

> It's time for peace: https://www.ietf.org/rfc/ien/ien137.txt

Cool, there's even an RFC for this kind of time wasting!)
chrisf Posted January 11, 2018

49 minutes ago, valant said:
> well, it's all funny, but I was referring to the real specification which is Open Firmware (IEEE something :lol:)

If you're referring to IEEE 1275: https://www.openfirmware.info/data/docs/of1275.pdf, or https://standards.ieee.org/findstds/standard/1275-1994.html if you feel like paying for a copy:

> The property-encoding format is independent of hardware byte order and alignment characteristics. The encoded byte order is well-defined (in particular, it is big endian). Individual items are concatenated to form composed types without any "padding" for alignment reasons.
valant Posted January 11, 2018

I know! That's why I said in the first place that this requirement is synthetic. OF isn't evolving; it's stuck. DTs are pull-offs from that spec, not the entire spec, far from it. And as for those links you showed from GitHub: it's an especial shame that, while they keep doing something with the DT "standard", they don't add such a needed and obvious feature as bi-endianness. Maybe I should write to them? But I'm almost certain we would sink into a very "constructive" discussion of what is more natural. Nevertheless, I am writing a UEFI as a hobby project, for MIPS/ARM, and if DTs are ever needed there (so far they aren't), they will be LE on LE machines. And BE on BE machines (for the sake of completeness).