48 comments

  • RobotToaster an hour ago

    https://raw.githubusercontent.com/apple/ml-sharp/refs/heads/...

    "Exclusively for research purposes" so not actually open source.

      andy99 40 minutes ago

      Meta’s campaign to corrupt the meaning of Open Source was unfortunately very successful and now most people associate releasing the weights with open source.

        singpolyma3 6 minutes ago

        Releasing weights is fine but you also need to be allowed to... Use the model :P

        Blackthorn 34 minutes ago

        It's deliciously ironic how a campaign to dilute the meaning of free software ended up getting diluted itself.

          sho_hn 24 minutes ago

          It's gratifying. I used to tilt at windmills on HN about this and people would be telling me with absolute condescension how the ship had sailed regarding the definition of Open Source, relegating my own life's work to anachronism.

          People slowly waking up to how daft and hype-driven misusing the term was all along has been amazing.

            archerx 11 minutes ago

            The wildest one is how people say that just because you produce open source software, you should be happy that multibillion-dollar corporations are leeching value from your work while giving nothing back and in fact making your life harder. That’s the biggest piss-on-my-back-and-tell-me-it’s-raining bullshit I’ve ever heard, and it makes me not want to open source a damn thing without feeling like a fool.

        ProofHouse 10 minutes ago

        Thank you! Shame on all these big corps that do this, forever. Meta #1, Apple #2, pseudo-fake journalists #3

      ffsm8 41 minutes ago

      The readme doesn't claim it's open source either, from what I can tell. Seems to be just a misguided title by the person who submitted it to HN.

      The only reference seems to be in the acknowledgements, saying that this builds on top of open source software.

      thebruce87m 3 minutes ago

      I’m going to research whether I can make a profitable product from it. I’ll publish the results, of course.

      zarzavat an hour ago

      There's no reason to believe that weights are copyrightable. The only reason to pay attention to this "license" is that it's enforced by Apple; in that sense they can write whatever they want in it ("this model requires giving ownership of your first born son to Apple", etc.). The content is irrelevant.

      LtWorf 16 minutes ago

      When AI and open source are used together, you can be sure it's not open source.

      echelon an hour ago

      That sucks.

      I'm writing open desktop software that uses WorldLabs splats for consistent location filmmaking, and it's an awesome tool:

      https://youtube.com/watch?v=iD999naQq9A

      This next year is going to be about controlling a priori what your images and videos will look like before you generate them.

      3D splats are going to be incredibly useful for film and graphics design. You can rotate the camera around and get predictable, consistent details.

      We need more Gaussian models. I hope the Chinese AI companies start building them.

      sa-code an hour ago

      Should the title be corrected to source-available?

      hwers an hour ago

      I don’t agree with this idea that for a model to be open source you have to be able to make a profit off of it. Plenty of open source code licenses don’t include that requirement.

        tremon an hour ago

        https://opensource.org/osd#fields-of-endeavor

        > The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, [..]

        cwillu an hour ago

        And you would be wrong as a simple question of fact.

        Aachen an hour ago

        That's source-available: you get to see the code and learn from it, but if you're not allowed to use it however you want (the only common restrictions being that you must credit the creator(s) and allow others the same freedom on derivative works), then it doesn't meet the traditional definition of open source.

        wahnfrieden an hour ago

        The only popular one I know of is CC-NC, but that is not open source.

  • analog31 7 minutes ago

    I wonder if it helps that a lot of people take more than one picture of the same thing, thus providing them with effectively stereoscopic images.

  • hermitcrab 16 minutes ago

    "Sharp Monocular View Synthesis in Less Than a Second"

    "Less than a second" is not "instantly".

      ethmarks 9 minutes ago

      What would your definition of "instantly" be? I would argue that, compared to taking minutes or hours, taking less than a second is fast enough to be considered "instant" in the colloquial definition. I'll concede that it's not "instant" in the literal definition, but nothing is (because of the principle of locality).

  • lvl155 21 minutes ago

    I don’t know when Apple turned evil, but it’s hard for me to support them any further after nearly four decades. Everything they do now is directly opposite to what they stood for in the past.

      tsunamifury 3 minutes ago

      Apple absolutely never believed in open source in the past, so yes, they are not the same.

  • d_watt an hour ago

    I’ve been using some time off to explore the space, and the related projects StereoCrafter and GeometryCrafter are fascinating. Applying this to video adds a temporal-consistency angle that makes it much harder and more compute-intensive, but I’ve “spatialized” some old home videos from the Korean War and it works surprisingly well.

    https://github.com/TencentARC/StereoCrafter

    https://github.com/TencentARC/GeometryCrafter

      sho_hn 22 minutes ago

      I would love to see your examples.

  • jtrn an hour ago

    I was thinking of testing it, but I have an irrational hatred for Conda.

      optionalsquid an hour ago

      You could use pixi instead, as a much nicer/saner alternative to conda: https://pixi.sh

      Though in this particular case, you don't even need conda. You just need Python 3.13 and a virtual environment. If you have uv installed, it's even easier:

          git clone https://github.com/apple/ml-sharp.git
          cd ml-sharp
          uv sync
          uv run sharp
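
      And if you'd rather skip uv as well, a plain virtual environment should do it. A rough sketch, assuming the repo ships a standard pyproject.toml and that the `sharp` entry point matches the `uv run sharp` command above:

          git clone https://github.com/apple/ml-sharp.git
          cd ml-sharp
          python3.13 -m venv .venv      # requires a Python 3.13 interpreter on PATH
          source .venv/bin/activate
          pip install -e .              # assumes a standard pyproject.toml install
          sharp --help                  # entry point assumed from `uv run sharp` above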

      moron4hire an hour ago

      You aren't being irrational.

      jtreminio an hour ago

      You can simply use a `uv` env instead?

  • bbstats 7 minutes ago

    would love a multi-image version of this.

  • gjsman-1000 an hour ago

    Is this the same model as the “Spatial Scenes” feature in iOS 26? If so, it’s been wildly impressive.

      alexford1987 2 minutes ago

      It seems like it, although the shipped feature doesn’t allow as much freedom of movement as the demos linked here (which makes sense as a product decision: I assume the farther you stretch it, the more likely it is to do something that breaks the illusion).

      The “scenes” from that feature are especially good for use as lock screen backgrounds

      mercwear an hour ago

      I am thinking the same thing, and I do love the effect in iOS 26.

  • burnt-resistor 20 minutes ago

    Damn. I recall UC Davis was working on this sort of problem for CCTV footage 20 years ago, but this is really freakin' progress now.

  • jokoon an hour ago

    Does it make a mesh?

    It doesn't seem very accurate. No idea what the result would be with a photo of a large scene; that could be useful for level designers.

  • Invictus0 an hour ago

    Apple is not a serious company if they can't even spin up a simple frontend for their AI innovations. I should not have to install anything to test this.

      consonaut an hour ago

      It's included in the iOS photo gallery. I think this is a separate release of the tech underneath.

  • b112 2 hours ago

    Ah great. Easier for real estate agents to show slow panning around a room, with lame music.

    I guess there are other uses?? But this is just more abstracted reality. It will be inaccurate just as summarized text is, and future people will again have no idea as to reality.

      stevep98 an hour ago

      It will be used for spatial content, for viewing in Apple Vision Pro headset.

      In fact you can already turn any photo into spatial content. I’m not sure if it’s using this algorithm or something else.

      It’s nice to view holiday photos with spatial view … it feels like you’re there again. Same with looking at photos of deceased friends and family.

      tim1994 an hour ago

      For panning you don't need a 3D view/reconstruction. This also allows translational camera movements, but only for nearby views. Maybe I am overly pedantic here, but for HN I guess that's appropriate :D
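
      To spell out the pedantry (standard pinhole-camera math, my own addition rather than anything from the paper): a pure rotation R about the camera center moves pixels by a depth-independent homography, so a pan can be faked from a flat image, while a translation t adds parallax that depends on each point's depth Z, which is what the 3D reconstruction is for. With K the intrinsics and x, x' homogeneous pixel coordinates:

          % pure rotation: a homography, no depth involved
          x' \simeq K R K^{-1} x
          % rotation + translation: result depends on the point's depth Z
          x' \simeq K (R X + t), \qquad X = Z \, K^{-1} x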

        parpfish an hour ago

        For a good slow pan, you don’t need 3d reconstruction but you DO need “Ashokan Farewell”