• Heavybell@lemmy.world · +53/-1 · 1 year ago

    Why is everything RISC-V some low power device, I want a workstation with PCIe 5.0 powered by RISC-V.

        • oo1@kbin.social · +16/-1 · 1 year ago

          I’d guess they’d need to figure out whatever Apple did with its ARM chips: efficient use of many cores and probably some fancy caching arrangement.

          It may also be a matter of financing, being able to afford (competing with Intel, Apple, AMD, and Nvidia) to book the most advanced manufacturing for decent-sized batches of more complex chips.

          Once they have proven reliable core/chip designs, supporting more products and a growing market share, I imagine more financing doors will open.

          I’d guess RISC-V is mostly financed by industry consortia, maybe involving some governments, so it might not be about investor finance; but these funders will want to see progress towards their goals. If most of them want replacements for embedded low-power ARM chips, that’s what they’re going to prioritise over consumer/powerful standalone workstations.

        • duncesplayed@lemmy.one · +2 · 1 year ago · edited

          At a minimum they’ve got to design a wider issue width. Current high-performance superscalar chips like the XuanTie C910 (what this laptop’s SoC is built around) are only triple-issue (3-wide superscalar), which gives a theoretical maximum of 3 IPC per core. (And even by RISC standards, RISC-V has pretty “small” instructions, so 3 IPC isn’t much compared to 3 IPC even on ARM. E.g., RISC-V does not have any comparison instructions, so comparisons need to be composed of at least a few more elementary instructions.) As you widen the issue, that complicates the pipelining (and detecting pipeline hazards).

          There’s also some speculation that people are going to have to move to macro-op fusion, instead of implementing the ISA directly. I don’t think anyone’s actually done that in production yet (the macro-op fusion paper everyone links to was just one research project at a university and I haven’t seen it done for real yet). If that happens, that’s going to complicate the core design quite a lot.

          None of these things are insurmountable. They just take people and time.

          I suspect manufacturing is probably a big obstacle, too, but I know quite a bit less about that side of things. I mean a lot of companies are already fabbing RISC-V using modern transistor technologies.

    • oo1@kbin.social · +23/-2 · 1 year ago

      I think that’s the whole point of RISC: it saves power over CISC but may take longer to compute some tasks.

      That’d be why things like phones, with their limited batteries, often prefer RISC.

      • jdaxe@infosec.pub · +6 · 1 year ago

        That’s true for small and simple microcontrollers, but larger and more complicated chips can theoretically implement macro-operation fusion in hardware to get benefits similar to those of CISC architectures.

      • duncesplayed@lemmy.one · +1 · 1 year ago

        It definitely could scale up. The question is who is willing to scale it up? It takes a lot less manpower, a lot less investment, and a lot less time to design a low-power core, which is why those have come to market first. Eventually someone’s going to make a beast of a RISC-V core, though.

          • intrepid@lemmy.ca · +1 · 1 year ago

            China is the main driver of growth in RISC-V currently, but we need to see how the trade wars will affect that. There was recent news about RISC-V specifically in this regard.

            We might also see more activity from Intel, Qualcomm and Nvidia.

          • merthyr1831@lemmy.world · +1 · 1 year ago

            Excluding nation-states, which have their own strategic reasons: Nvidia, Google, Amazon, IBM, and almost every other big cloud player are going to begin investing in RISC-V as it matures.

            ARM charges a lot for its licensing, and that’s only going up in the near future. x86 is simply too expensive to compete in unless you’re AMD or Intel.

            At some point the cloud CPU players are gonna jump on RISC-V for the cost savings and the prospect of building their own platforms without licensing fees or a lack of input on the direction of the ISA.

    • wiki_me@lemmy.ml · +17 · 1 year ago

      Milk-V is going to release a pretty powerful system; IIRC I read it will be released in about 10 months. Ventana also reportedly will release a server CPU in 2024.

    • skilltheamps@feddit.de · +9 · 1 year ago

      It takes time, as it’s all under heavy development. Only very recently have RISC-V SBCs that can run Linux become available; before that it was pretty much microcontrollers only. Be patient :)

    • qaz@lemmy.world · +3 · 1 year ago · edited

      There is the 64-core, 32-128GB DDR4 Milk-V Pioneer, but it uses PCIe 4.0.

    • mindbleach@sh.itjust.works · +1 · 1 year ago

      Even once the kinks are worked out, the primary market for RISC-V will be low-end. It’s a FOSS (FOSH?) upgrade path from 8-bit and 16-bit ISAs.

      There will be no reason for embedded systems to use ARM.

      • nickwitha_k (he/him)@lemmy.sdf.org · +1 · 1 year ago

        Initial market, absolutely. It’s already there at this point. Low-power 32-bit ARM SoC MCUs have largely replaced 8-bit and 16-bit AVR MCUs, as well as MIPS, in new designs. They’ve just been priced so well for their performance, with relative cost savings on the software/firmware dev side (e.g. Rust can run with its std library on Espressif chips, making development much quicker and easier).

        With ARM licensing looking less and less tenable, more companies are also moving to RISC-V from it, especially if they have in-house chip architects. So, I also suspect that it will supplant ARM in such use cases - we’re already seeing such in hobbyist-oriented boards, including some that use a RISC-V processor as an ultra-low-power co-processor for beefier ARM multi-core SoCs.

        That said, unless there’s government intervention to kill RISC-V, under the guise of chip-war (but really likely because of ARM “campaign contributions”), I suspect that we’ll have desktop-class machines sooner than later (before the end of the decade).

        • mindbleach@sh.itjust.works · +2 · 1 year ago

          I would’ve had my doubts, until Apple somehow made ARM competitive with x86. A trick they couldn’t pull off with PowerPC.

          I guess linear speed barely ought to matter, these days, since parallelism is an order-of-magnitude improvement, and scales.

          • nickwitha_k (he/him)@lemmy.sdf.org · +2 · 1 year ago

            I would’ve had my doubts, until Apple somehow made ARM competitive with x86. A trick they couldn’t pull off with PowerPC.

            Yeah. From what I’ve pieced together, Apple’s dropping PowerPC ultimately came down to perf/watt and delays in delivery from IBM of a suitable chip that could be used in a laptop and support 64-bit instructions. x86 beat them to the punch and was MUCH more suitable for laptops.

            Interestingly, the mix of a desire for greater vertical integration and chasing perf/watt is likely why they went ARM. With their license, they have a huge amount of flexibility and are able to significantly customize the designs from ARM, letting them optimize in ways that Intel and AMD just wouldn’t allow.

            I guess linear speed barely ought to matter, these days, since parallelism is an order-of-magnitude improvement, and scales.

            It is definitely a complicated picture when figuring out performance. Lots of potential factors come together to make the whole picture. You’ve got ops per clock cycle per core, physical size of a core (RISC cores generally have fewer transistors, making them smaller and more uniform), integrated memory, on-die co-processors, etc. The more that the angry little pixies can do in a smaller area, the less heat is generated and the faster they can reach their destinations.

            ARM, being a mature and customizable RISC arch, really should be able to chomp into x86 market share. RISC-V, while younger, has been able to grow and advance at a pace not seen before, to my knowledge, thanks to its open nature. More companies are able to experiment and try novel architectures than under x86 or ARM. The ISA is what’s gotten me excited again about hardware and learning how it’s made.

            • mindbleach@sh.itjust.works · +2 · 1 year ago

              With their license, they have a huge amount of flexibility and are able to significantly customize the designs from ARM, letting them optimize in ways that Intel and AMD just wouldn’t allow.

              An opportunity RISC-V will offer to anyone with a billion dollars lying around.

              ARM, being a mature and customizable RISC arch, really should be able to chomp into x86 market share.

              x86 market share is 99.999% driven by published software. Microsoft already tried expanding Windows, and being Microsoft, made half a dozen of the worst decisions simultaneously. Linux dorks (hi) have the freedom to shift over to whatever, give or take some Wine holdovers. Apple just dictated what would change, because you can do that when you’re a petit monopoly.

              What’s really going to threaten x86 are user-mode emulators like box86, fex-emu, and qemu-user. That witchcraft turns Windows/x86 binaries into something like Java: it will run poorly, but it will run. Right now those projects mostly target ARM, obviously. But there’s no reason they have to. Just melting things down to LLVM or Mono would let any native back-end run up-to-date software on esoteric hardware.

              • nickwitha_k (he/him)@lemmy.sdf.org · +2 · 1 year ago

                An opportunity RISC-V will offer to anyone with a billion dollars lying around.

                Exactly this. Nvidia and Seagate, among others, have already hopped on this. I hold out hope for more accessible custom processors that would enable hobbyists and smaller companies to join in as well, and make established companies more inclined to try novel designs.

                x86 market share is 99.999% driven by published software. Microsoft already tried expanding Windows, and being Microsoft, made half a dozen of the worst decisions simultaneously.

                Indeed. I’ve read opinions that that was historically also a significant factor in PowerPC’s failure: no one is going to want to use your architecture if there is no software for it. I’m still rather left scratching my head at a lot of MS’s decisions on their OS and device support. IIRC, they may finally be taking an approach to drivers that’s more similar to Linux’s, but, without being a bit more open with their APIs, I’m not sure how that will work.

                Linux dorks (hi)

                Hello! 0/

                What’s really going to threaten x86 are user-mode emulators like box86, fex-emu, and qemu-user. That witchcraft turns Windows/x86 binaries into something like Java: it will run poorly, but it will run.

                Hrm…I wonder if there’s some middle ground or synergy to be had with the kind of witchcraft that Apple is doing with their Rosetta translation layer (though, I think that also has hardware components).

                Right now those projects mostly target ARM, obviously. But there’s no reason they have to. Just melting things down to LLVM or Mono would let any native back-end run up-to-date software on esoteric hardware.

                That would be brilliant.

                • mindbleach@sh.itjust.works · +2 · 1 year ago

                  IIRC Apple’s ARM implementation has a lot of extensions that coincidentally work just like x86.

                  Frankly I’m gobsmacked at how many “universal binary” formats are just two native executables in a trenchcoat. Especially after MS and Apple both got deep into intermediate representation formats. Even a static machine-code-only segment would simplify the hell out of emulation.

    • nickwitha_k (he/him)@lemmy.sdf.org · +1 · 1 year ago · edited

      Me too. Hell, I’d settle for a multi-core RV64GC processor offered as a bare chip and socket, since I’ve always wanted to give building a motherboard a try, but the dev systems available seem to have everything soldered :(

  • supersane@lemmy.ml · +7 · 1 year ago

    Does RISC-V have security benefits since it is open source? Is it easier to detect hardware backdoors if it is used instead of x86 or ARM?

    • intrepid@lemmy.ca · +8 · 1 year ago

      The RISC-V instruction set (ISA) is open source, but an actual implementation (microarchitecture) is under no such obligation. And among the implementations that can run Linux, none (that I know of) are open-source designs.

      With regard to hardware backdoors: no, closed-source RISC-V implementations are not easier to audit for security than x86 or ARM.

    • ReakDuck@lemmy.ml · +6 · 1 year ago · edited

      I think the CPU chips themselves are closed source, but the architecture is open under MIT, so anyone can implement it (and keep their implementation closed).

  • fuckwit_mcbumcrumble@lemmy.world · +9/-3 · 1 year ago

    The Pad 4A is a bit more interesting to me. 1280x800 is really awful in 2023, but the Pad 4A has a 10" 1920x1200 display, which would be so much nicer in a small-form-factor laptop.

    • notthebees@reddthat.com · +5 · 1 year ago

      While I agree with you that the higher-resolution 16:10 display is nicer, 1280x800 isn’t bad once you take screen size into consideration: the PPI for both displays is in the low 200s, and a 1080p 15.6-inch display has a lower PPI than either of them.

      • fuckwit_mcbumcrumble@lemmy.world · +5 · 1 year ago

        To me it’s less about the PPI and more about the ability to fit things on the screen.

        1280x800 is just small enough that certain elements might not fit on the screen, or if they do, they just barely fit with no wiggle room. 1920x1200 at native scaling on a screen that size is probably unreadable even to freaks like me (I run 150% scaling on a 16” 4K display), but it gives me the option to turn scaling off or down and actually fit things when needed.

    • merthyr1831@lemmy.world · +1 · 1 year ago

      I use 1280x800 on my Steam Deck and honestly it’s fine for 90% of stuff, as long as it can scale properly. Am I the only person who ran a 720p monitor back when people were just getting into 4K?

      • fuckwit_mcbumcrumble@lemmy.world · +3 · 1 year ago

        I ran 1280x800 and 1366x768 for years and hated it. After the retina MBP came out and embarrassed everyone, I vowed I’d never go back.

        1080p is the minimum I’ll do at this point for a modern device.

  • Gork@lemm.ee · +7/-1 · 1 year ago

    Hmmm, I wonder if it’s possible to hack that tiny keyboard together with a Steam Deck…

  • AutoTL;DR@lemmings.world (bot) · +5 · 1 year ago

    This is the best summary I could come up with:


    Known as the Lichee Console 4A, the laptop features a display size of just 7 inches, 16GB of memory, and an LM4A TH1520 processor.

    Despite its small size, the Lichee Console 4A packs the features and functionality that you’d generally expect from a mainstream x86 laptop in this price range: LPDDR4X memory, 128GB of eMMC storage, and an optional external NGFF SSD.

    Display-wise, the resolution of the 7-inch display is 1280 x 800 with capacitive touchscreen support, plus a mini HDMI port for external monitor output.

    There’s also a 2MP front camera that should suffice for basic web calling.

    Additionally, there’s a microSD card reader, which can expand the device’s storage on top of what it already has.

    Other miscellaneous specs include a battery capacity of 3000mAh, RedPoint (seemingly a copy of Lenovo’s TrackPoint), a 72-key keyboard, an aluminum outer shell, and a weight of 650 grams.


    The original article contains 295 words, the summary contains 150 words. Saved 49%. I’m a bot and I’m open source!

        • aperson@beehaw.org · +1 · 1 year ago · edited

          You would have to get a special version of LWJGL for it to even run on RISC-V, and this thing doesn’t have any dedicated graphics hardware. The one guide I saw had Minecraft running at 2fps on similar-ish hardware.

            • aperson@beehaw.org · +2 · 1 year ago · edited

              I would say yes, but probably not for a lot of users. Minecraft isn’t inherently threaded, and the individual CPU cores on this aren’t super fast (though pretty decent). Another bottleneck would be the I/O speed, which I have no clue about. Also, why the hell would you run a server on a new laptop when you can buy one of their other pieces of hardware for cheaper?