Talk:Apple silicon/Archive 2

Lightning Digital AV Adapter

Does the Apple-logo'd ARM chip in this adapter count...?

https://www.theverge.com/2013/3/1/4055758/why-does-apples-lightning-to-hdmi-adapter-have-an-arm-computer-inside

https://panic.com/blog/the-lightning-digital-av-adapter-surprise/ — Preceding unsigned comment added by 91.125.45.72 (talk) 02:06, 12 January 2021 (UTC)

I'd say yes.. tentatively. This article could suddenly explode to include every Apple-branded microcontroller, of which there are plenty with little to no information about them, and I think that would be outside the scope of this article. This curious part, however, seems to be running some sort of mini-iOS and would therefore be welcome here. Certainly up for debate. I don't know how to name or classify it though, apart from the "339S0196" that's printed on the module. -- Henriok (talk) 12:37, 12 January 2021 (UTC)
There.. I made a "Miscellaneous" section and added an image of it. -- Henriok (talk) 18:19, 16 January 2021 (UTC)

Thank you very much for that :) Anamyd (talk) 13:14, 16 February 2021 (UTC)

Incorrect GFLOP counts for Apple A-series processors

[Contribution by another editor was removed by that editor after Henriok's message]

Note to editors. This kind of investigation is "Original research" and can't be included in Wikipedia. Correct or not, it's not allowed. –– Henriok (talk) 13:16, 10 May 2021 (UTC)

Requested move 1 June 2021

The following is a closed discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. Editors desiring to contest the closing decision should consider a move review after discussing it on the closer's talk page. No further edits should be made to this discussion.

The result of the move request was: moved. (closed by non-admin page mover) Elli (talk | contribs) 20:41, 8 June 2021 (UTC)


Apple-designed processors → Apple silicon – I think "Apple silicon" is more recognizable and more natural (per WP:CRITERIA) than "Apple-designed processors" because of its common usage by independent sources and by Apple when referring to their own silicon. It is also a more concise title (per WP:CRITERIA) that is just as precise as "Apple-designed processors", because "Apple silicon" also unambiguously identifies the article's subject and distinguishes it from other subjects. It is also more consistent (WP:CONSISTENT), because "Apple silicon" follows the pattern of similar articles' titles like MacBook Air (Apple silicon) and iMac (Apple silicon). Andibrema (talk) 00:11, 1 June 2021 (UTC)

Does Apple use that term for all Apple-designed processors, or just the M1? Guy Harris (talk) 00:27, 1 June 2021 (UTC)
All Apple-designed processors: https://www.youtube.com/watch?v=b13xnFp_LJs&t=820s Andibrema (talk) 00:32, 1 June 2021 (UTC)
  • Support, as this is the term Apple uses to refer to their own processors. -Shivertimbers433 (talk) 18:58, 8 June 2021 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

The history of the T2 research

I'm very new to editing, but the true history of the T2 security issues is documented, with citations, at https://blog.t8012.dev/on-bridgeos-t2-research/

Hopefully my edits fixed some inaccuracies — Preceding unsigned comment added by Richardkmark (talkcontribs) 14:20, 26 June 2021 (UTC)

Thanks for your edits Rick. In future, especially when citing yourself, it might be safer to suggest edits on the talk page first. If you post info here, most editors would be glad to help you add it to the article in a way that keeps with Wikipedia style. (And don't forget to sign your talk page comments with four tildes!) — HTGS (talk) 22:41, 26 June 2021 (UTC)
I think it's time for a separate article about the T2 chip. It's due. And the section of hacking the T2 belongs there, not here. -- Henriok (talk) 21:13, 27 June 2021 (UTC)

About arguing, GHz battling and original research

First.. edit battles should NOT be resolved in edit comments. Use the talk page for crying out loud!

Regarding Hz numbers: The clock speed figures for most Apple SoCs are measured by Geekbench, as Apple isn't talking. Geekbench is a performance measuring tool that does multiple runs of its benchmark routines and tries to figure out what clock speed the CPU is running at. Apple's SoCs use two different core types, and Geekbench tries its best to figure out whether its tests are running on the performance cores, the efficiency cores or both, and whether this is consistent over the different runs, all while juggling performance capping due to heating. These types of processors have a dynamic core clock, where the speed is determined by some heuristic that's not publicly known. Not only is the figure a calculated average, it also changes for each run depending on the circumstances of that run, and the real maximum speed is never revealed. So please, bear this in mind when you argue whether the CPUs run at 2.98, 2.99, 3.09 or 3.10 GHz.. All are true, none are true. Please resolve it to 3 GHz or something, since the uncertainty is waaaay larger than two or three significant digits can justify.

Regarding original research: Calculating theoretical performance based on GHz and reasonable assumptions about technical features in the CPU is original research and is not allowed on Wikipedia. If you have to explain your reasoning or method behind the calculation, it's original research. Figuring out an area given the length and width is not; such a calculation is obvious and isn't considered original research. Calculating the floating point performance of a GPU is. It is the calculation method, not the results or the calculation itself, that _must not_ originate on Wikipedia. The only way such a figure is allowed on Wikipedia is if someone else has done the reasoning, measuring and calculating, and we then link to that source so that people can check whether the calculation is sane or not. It's not Wikipedia's job to do that. We must not be the source of The Truth™; we are a collection of other people's research.
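
To make the distinction concrete, the kind of calculation being described looks roughly like the sketch below. Every input in it is an illustrative placeholder rather than a sourced figure for any particular chip, which is exactly why a result like this needs an outside source before it can go in the article.

```swift
// Back-of-the-envelope GPU peak-FLOPS estimate of the sort discussed above.
// All inputs are illustrative placeholders, not sourced specifications.
let gpuCores = 4                  // assumed number of GPU cores
let fp32AlusPerCore = 128         // assumed FP32 ALUs per core
let flopsPerAluPerCycle = 2.0     // a fused multiply-add counts as two FLOPs
let clockGHz = 1.0                // assumed GPU clock in GHz

let peakGFLOPS = Double(gpuCores * fp32AlusPerCore) * flopsPerAluPerCycle * clockGHz
print("Theoretical peak ≈ \(peakGFLOPS) GFLOPS")   // 4 × 128 × 2 × 1.0 = 1024
```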

So Lastly: Please discuss this IN THIS THREAD. Not in edit comments. -- 15:10, 30 September 2021 (UTC)

There are two problems:
A14 CPU Clock Speed
A14 GPU FLOPS

First:
A14 CPU Clock Speed:
Anonymous's solution: 3.0 GHz
TECH_DUDE_MASTER's solution: not yet stated

There are two reputable sources:
Geekbench,
Anandtech

There are 4 candidates we have discussed:

2.99 GHz: Partially valid. Geekbench says this, and they vary by 0.001 GHz. Note that more than one CPU core may be active while the benchmark is being taken. This fact affects clock speed.

3.00 GHz (a.k.a. 2.998 GHz): Most accurate answer. Anandtech notes that the maximum speed of A14 is 2998 MHz when only ONE core is active, and that drops to 2890 MHz when TWO are active. Geekbench has something in between, which is 2990 MHz, but they may have multiple cores temporarily active. This fact can affect CPU clock speeds in ways that can't be controlled for, even if the second core is active 0.1% of the time.

Anandtech does a very deep dive into the A14, talking about clock cycle latency for individual operations, size of ROB, etc. So whatever human made the article has a lot of expertise and measures CPU metrics with more precision than an automated benchmark.

3.09 GHz: No sources state this. If any do, please state them here.

3.10 GHz: A big issue here. Several sites say this, but those are the sites that don't do their homework and copy whatever others do. Take a look at some of the sites that say this. Some have the A11 GPU at 250 GFLOPS, which is far from accurate - should be closer to 400 GFLOPS (see GPU talk below, this fact was just a proof of concept). In short, the sites that say this are much less reputable. If you can refute this claim with evidence, please do so here.

Current resolution (last edited by Anonymous): 3.09 is off the table. 3.10 GHz does not have enough evidence. 2.99 GHz has been disproved by reasoning. Change the A14 performance and efficiency cores to have 0.1 GHz precision. A14 performance: 2.998 GHz -> 3.0 GHz.

Alternative resolutions: Performance can be 3.00 GHz instead of 3.0 GHz

Note: At the A14 generation, Apple was clearly aiming to break the "3 GHz" barrier. That's not just an opinion, anyone can see that. Whatever the final resolution is, keep that in mind. Keeping 0.1 GHz precision helps demonstrate that.

A14 GPU FLOPS:
Anonymous's solution: 1.0 TFLOPS
TECH_DUDE_MASTER's solution: 0.9984 TFLOPS

Before presenting our cases, we need to verify that we both understand these facts:

  1. GPU clock speeds from the A12 onward are all FUDGED (Demonstration of what the GPU clock speeds look like). We know the ALU counts precisely, estimate (a.k.a. original research) the FLOPS with Geekbench and the rare confirmed FLOPS measurement, and fudge clock speeds to match FLOPS.
  2. GPU FLOPS counts ARE ALREADY ORIGINAL RESEARCH for the A12, A12X/Z, A13, A14 and A15 (please correct me if one of the listed processors has a verified source stating its FLOPS precisely). According to Henriok's standards, we should scrap everything we have there and put "TBC". However, that is not what we have done on this wiki page for the last few years. In this edit war, TECH_DUDE_MASTER has continued this precedent of using original research, following what other Wiki users (original researchers) said and stating opinions without citing sources. In this discussion, we should at least use some hard logic (such as 2.61 TFLOPS / 2 = 1.30 TFLOPS). At a minimum, such hard logic counts as Henriok's "length * width = area" and means some of what we say here will not be thrown out as "original research".
  3. M1 and A14 have the exact same architecture, except that M1 is double A14 in some factors: GPU cores, CPU performance cores, memory bandwidth. Therefore, A14 has the same ALUs per GPU core as M1. This fact is so obvious that it cannot possibly be considered original research. There is probably a source already cited in this article that can back up this claim. The rest of what I'm about to say is more questionable, but it's either that or EVERYTHING = TBC.

Note to Henriok: If the discussion below sounds too much like "original research" after BOTH of us have responded to this thread, please replace clock speed and FLOPS counts for every A-series GPU from A12 onward with "TBC". Everything from here on out is just for the sake of ending the Edit War since we all know you'll cast it out as original research anyway.

Nice summary! Kudos! I would argue another solution instead of replacing everything with TBC. Use footnotes to describe the uncertainty, and reduce the certainty of many significant figures by stating that the figures are approximations. Instead of doing "0.9987 GHz", do "ca 1 GHz" with a source or two backing that up. -- Henriok (talk) 21:45, 8 November 2021 (UTC)

Drumroll for The Debate™ between Anonymous and TECH_DUDE_MASTER:

Demonstration of such preparation — Preceding unsigned comment added by Anonymous84hgh389hg (talkcontribs) 01:45, 3 October 2021 (UTC)

Anonymous

Going by a simple calculation, A14 is 2.61 / 2 = 1.30 TFLOPS. However, A14 has lower clock speeds for the CPU, so most likely for the GPU as well. Multiply that number by the ratio of CPU clock speeds: 1.30 * (3.0 / 3.2) = 1.22 TFLOPS. Reduce that even further to 1.0 or 1.1 TFLOPS, because the GPU consumes even more power per MHz than the CPU. Look at the M1 Mac Apple event and its graphs of performance vs. power for CPU and GPU: the GPU is about 30% faster or slower at double or half the power, while the CPU varies by about 40%.
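
Restating that arithmetic as a quick sketch (the figures are the ones quoted above, treated as assumptions rather than sourced values):

```swift
import Foundation

// The scaling estimate described above, written out step by step.
let m1GpuTFLOPS = 2.61              // M1 GPU figure quoted in the comment above
let halvedForA14 = m1GpuTFLOPS / 2  // A14 has half the GPU cores of M1 -> ~1.30 TFLOPS
let clockRatio = 3.0 / 3.2          // A14 vs. M1 CPU clock, used as a proxy for the GPU clock
let scaledEstimate = halvedForA14 * clockRatio
print(String(format: "≈ %.2f TFLOPS before the power-budget discount", scaledEstimate))  // ≈ 1.22
```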

(From the evidence directly above) With the same change in power, GPU MHz varies less. In other words, it takes more power to get the GPU MHz to increase by the same percent. It is reasonable to assume Apple would want to optimize for power on an iPhone.

Moore's law: Every two years, performance doubles (I know this is different from transistor density). Doing the math, that means performance goes up about 41% per year on average. Ignore what Apple says about GPU generational performance improvements, because it doesn't mean anything. Geekbench is the best measurement, and it does show about 41% per year. A15 (all 5 cores enabled) was a bit higher than average. Before VanishedUser put in "original research", the A14 was listed as 10% better than A13, at around 800 GFLOPS. That's very far below what we predict from M1 and nowhere in line with Moore's Law. The 800 GFLOPS was calculated from Apple's statement of "30% better than A12". VanisherUser set it to 1.0 TFLOPS to give a middle ground between the Moore's law calculation (√2 × 690 ≈ 976 GFLOPS) and the M1 calculation (1.22 TFLOPS).
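
For reference, the 41%-per-year figure is just the doubling assumption spread over one year; a minimal sketch, using the A13 figure quoted above:

```swift
import Foundation

// Doubling every two years implies a yearly growth factor of 2^(1/2) ≈ 1.41.
let yearlyFactor = pow(2.0, 1.0 / 2.0)
let a13GFLOPS = 690.0                          // A13 figure quoted in the comment above
let a14Estimate = yearlyFactor * a13GFLOPS     // ≈ 976 GFLOPS
print(String(format: "+%.0f%% per year; A14 estimate ≈ %.0f GFLOPS",
             (yearlyFactor - 1) * 100, a14Estimate))
```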

Note about VanisherUser: VanisherUser's original research gave the correct answer but the wrong reasoning. Since M1 has the same architecture as A14, there are 512 ALUs with 2 threads per ALU. However, everything from A11 to M1 has a 1024-wide max threadgroup size, so max threadgroup size doesn't determine the ALU count.

Current resolution (last edit by Anonymous): From the evidence discussed above, A14 is clearly the year Apple broke the 1 TFLOPS (yes, TFLOPS is singular) barrier on GPU. The last verifiable FLOPS measurement was A11, with Geekbench performance on par with Intel UHD Graphics 630 (384 GFLOPS). If anything, we should be debating how to raise A14 higher and smooth out the A12 family and A13 to meet the exponential growth in between.

Alternative resolutions: A14 is 1.00 TFLOPS instead of 1.0 TFLOPS. Or, if it's changed to 1.22 TFLOPS, then A12, A12X/Z, A13 can be changed to fit exponential growth from A11 (404 GFLOPS) to A14.

TƎCH_DUDƎ_MASTƎR

Tech_dude_master has been banned from Wikipedia, so the debate winner is:

anonymous

Congrats, victor!

Requested move 4 November 2021

The following is a closed discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. Editors desiring to contest the closing decision should consider a move review after discussing it on the closer's talk page. No further edits should be made to this discussion.

The result of the move request was: not moved. (closed by non-admin page mover) SkyWarrior 17:31, 11 November 2021 (UTC)


Apple silicon → Apple Silicon – The entire title is clearly a proper noun. Apple usually stylises it as lowercase, but most third-party sources do not. The title does not refer to silicon made by Apple; rather, "Silicon" is a metonym for CPUs.

These are just some examples. They are not cherry-picked – there are indeed 3rd party sources which use the lowercase naming, but they appear to be a minority. - Estoy Aquí (talk) 16:39, 4 November 2021 (UTC)

  • Oppose: The lowercase spelling is not the minority. Please note that most publications use title case in their headlines, such as the second headline you listed, which uses the lowercase spelling in the body text. Wikipedia does not use title case in article names.

I've done a search on "apple silicon" via Startpage, which is a search engine that delivers unbiased Google results. Here are the first ten English results that qualify as a published source (WP:RS).

Once again, most of these sources use title case, so you have to look into the body text. As you can see, there are five sources that only use the lowercase spelling, three that only use the uppercase spelling and two that are mixed. In order to qualify as the common name, the uppercase spelling would at least need a clear majority in usage, so that it could be considered more prevalent than the official designation. Andibrema (talk) 18:16, 4 November 2021 (UTC)

  • Oppose: When in doubt, keep the spelling as the owner intended. I don't think we should respect other parties reinventing the intended spelling. We respect all other Apple names with creative spelling styles, such as iMac, iPhone, eMac and iPadOS, as well as other companies' names such as GeForce, IBM i, IntelliPoint, DirectX, NeXTSTEP and id Software. I'm certain we can find many examples of others using other spelling styles for these. -- Henriok (talk) 20:17, 4 November 2021 (UTC)
  • Oppose The use of "silicon" here is metonymy for CPU, not a brand name. But that argument supports lowercase; we wouldn't insist on "Apple Keyboards". "Apple Silicon" is not the capitalization used by the company (except in Headline Case); while a minority of news outlets capitalize it, we should not. While I would consider a proposal to go back to the old name of Apple-designed processors, that isn't proposed here, and litigating that COMMONNAME discussion would be a distraction. User:力 (powera, π, ν) 22:17, 4 November 2021 (UTC)
  • Oppose: Not a proper name or exact product name, but seemingly more of a concept or category descriptive title. —⁠ ⁠BarrelProof (talk) 22:40, 4 November 2021 (UTC)
  • Oppose—The reasoning already provided is sufficient.—¿philoserf? (talk) 13:18, 5 November 2021 (UTC)
Thank you, but please note that discussions are not a vote, see also: WP:NOTAVOTE and WP:DEMOCRACY Andibrema (talk) 13:34, 5 November 2021 (UTC)
Thank you Andibrema. I have reviewed the content you shared.
I wonder, however, about the nature of discussion and consensus. If editor A in a conversation does a very thorough job of stating the case, in this text-based environment, what is left for other editors to do but say, “Hear, hear”, “I concur”, or “Well said, mate”? All of those are a natural part of verbal discussion in groups (not following Robert’s Rules of Order) and would be understood by another editor responsible for closing the discussion and recording the consensus.
I saw my comment as an equivalent of those exclamations. I will watch some more of these discussions and note how others respond when a position has already been well covered. —¿philoserf? (talk) 14:03, 5 November 2021 (UTC)
Considering the unanimous opposition among the five replies, the identified sources and the discussion comments, I suggest this is ripe for a WP:SNOWCLOSE. —⁠ ⁠BarrelProof (talk) 01:19, 6 November 2021 (UTC)
  • Support per nominator and MOS:TM. "When in doubt, keep the spelling as owner intended"? No, we don't: iPhone 5S and iPhone 6S are stylized "iPhone 5s" and "iPhone 6s" respectively, but we use a capitalized S in our titles. "Apple Keyboards" is a false analogy, because keyboard is a common noun for the topic, unlike "silicon", which is certainly not the WP:COMMONNAME for a computing processor. feminist (+) 17:03, 7 November 2021 (UTC)
  • To clarify, MOS:TM does not say "When in doubt, keep the spelling as owner intended." (And this RM is not about spelling – it is about capitalization.) —⁠ ⁠BarrelProof (talk) 15:39, 8 November 2021 (UTC)
  • I was responding to Henriok's comment. feminist (+) 04:04, 9 November 2021 (UTC)
Apple silicon is not a trademark; there's no source to support that, and in fact Apple's trademark list doesn't include it.[1] (It's labeled "non-exhaustive", but that's legally secure wording by Apple's lawyers - you'll find the list includes every single last trademark you've never heard of.)
Instead, WP:Article titles says that the common name should be used, which is Apple silicon. Andibrema (talk) 10:46, 9 November 2021 (UTC)

References

The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.

Proposed new table format for A-series and M-series

I made a proposed table format for A-series and M-series Apple silicon as shown below:

https://en.wikipedia.org/wiki/User:KLPeople/sandbox/apple_silicon_table

The reason for doing this is that the current tables are quite messy and hard to maintain, so I split the elements from large blocks of text into smaller individual cells.

Please provide any useful suggestions for this table format, thanks! KLPeople (talk) 15:36, 21 March 2022 (UTC)

Looks really nice! I have faith in you. Good work! -- Henriok (talk) 22:12, 21 March 2022 (UTC)
Thank you for seeing a need for improvement and proposing a solution (which I imagine took a lot of work and I want to respect that). I don't feel the proposed solution solves the accessibility or usability issues. Thisisnotatest (talk) 06:45, 9 June 2022 (UTC)

Accessibility issues on tables

I suggest breaking up these tables into multiple tables. Having to keep the models (rows) and concepts (columns) in mind as I scroll across and up and down creates a heavy cognitive load. I was not able to grok the table contents. And that's on a desktop with a large screen. I can only imagine it would be much worse on mobile.

Further, merged table headings are not friendly to blind people using screen readers. You could use the colgroup, rowgroup, and scope attributes on cells to make such a table accessible.[1] However, unless Wikipedia software is updated to add these automatically, editors are likely to break them.

One other advantage of not merging cells, should Wikipedia choose to implement responsive tables for mobile, is that mobile visitors would only have to scroll vertically to read the tables.[2][3]

Thisisnotatest (talk) 06:45, 9 June 2022 (UTC)

I can attest that this is a pain to navigate in the mobile version of the website. I don’t know how it could be done better though.

68.96.4.40 (talk) 03:41, 17 June 2022 (UTC)

References

  1. ^ "Tables with Irregular Headers", Web Accessibility Initiative, World Wide Web Consortium, retrieved June 8, 2022
  2. ^ Strauss, Dirk (December 19, 2018), "Responsive Tables – How To Scale Your Site Beautifully", Programming and Tech Blog, Dirk Strauss, retrieved June 8, 2022
  3. ^ Boudreaux, Ryan (August 14, 2014), "Tablesaw: Flexible tool for responsive tables", Tech Republic, TechnologyAdvice, retrieved June 8, 2022

Did Apple reduce clock speed in two consecutive generations?

Not entirely sure how we got this data, but the tables say A13 -> A14 was a doubling of parallelism while reducing clock speed. Then, Apple officially said they "doubled the number of FP32 math units per shader core" in their A15 video: here at 1:00. This seems like Apple taking things to an extreme. Note that they never released clock speed metrics for M2; someone, please find a source that benchmarks the M2's clock speed. I think they released official clock speeds for M1 or M1 Pro/Max somewhere; someone should look it up to validate. Also, there's further proof of A15 being 1.5 TFLOPS (explained below), which is weird. Their iPhones had a 50% boost in performance, while the laptops had a 38% boost. Either they didn't drop clock speed as much on iPhone, or our numbers here are bad (A14 could be higher or A15 could be lower).

On to the benchmarking tactic: Apple's A14 GPU introduced a new hardware-accelerated matrix multiplication instruction (called "simdgroup_matrix" in Metal Shading Language) that utilizes 80% of the ALU power. This is good, because with A13 and A12Z (first ARM Mac DTK) it was a crappy 25% utilization. For M1 Max, they have 8.0 trillion multiplies and adds/second, with a theoretical processing power of 10.0 trillion FLOPS. I benchmarked the A15 with matrix multiplication (reproducible - just call into "MPSMatrixMultiplication") and it fell at 1.2 TFLOPS - which is exactly 80% of 1.5 TFLOPS. However, we can change any of the metrics by 10% - 8.0/10.4 TFLOPS on M1 Max instead of 8.0/10.0 - and increase the A14 performance estimate by 5%. That can reduce the 50% performance boost to 35% (0.90 × 1.50 = 1.35), matching the M2's 38% increase.
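
For anyone who wants to reproduce that kind of measurement, here is a minimal sketch of an MPSMatrixMultiplication throughput test. The matrix size, iteration count and the 2·n³ FLOP count per multiply are illustrative choices of mine, not the exact setup used above, and a real run would need warm-up iterations and repeated timing to be meaningful.

```swift
import Foundation
import Metal
import MetalPerformanceShaders

// Minimal GPU matmul throughput probe: time C = A × B for n×n float32 matrices
// and convert the elapsed time into sustained FLOPS (≈ 2·n³ per multiply).
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let n = 2048
let rowBytes = n * MemoryLayout<Float>.stride
let desc = MPSMatrixDescriptor(rows: n, columns: n, rowBytes: rowBytes, dataType: .float32)

func makeMatrix() -> MPSMatrix {
    let buffer = device.makeBuffer(length: n * rowBytes, options: .storageModeShared)!
    return MPSMatrix(buffer: buffer, descriptor: desc)
}
let a = makeMatrix(), b = makeMatrix(), c = makeMatrix()

let matmul = MPSMatrixMultiplication(device: device,
                                     transposeLeft: false, transposeRight: false,
                                     resultRows: n, resultColumns: n, interiorColumns: n,
                                     alpha: 1.0, beta: 0.0)

let iterations = 50
let start = Date()
for _ in 0..<iterations {
    let commandBuffer = queue.makeCommandBuffer()!
    matmul.encode(commandBuffer: commandBuffer, leftMatrix: a, rightMatrix: b, resultMatrix: c)
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
}
let seconds = Date().timeIntervalSince(start)
let tflops = 2.0 * pow(Double(n), 3) * Double(iterations) / seconds / 1e12
print(String(format: "Sustained matmul throughput ≈ %.2f TFLOPS", tflops))
```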

Take this with a grain of salt, but it's not entirely worthless despite being partly original research. There was some logic and pure number play in the statements above. What this means is: we know for dead sure that performance increased by a measly ~40% with this Apple8 GPU generation (FAR behind Moore's Law and AMD/Nvidia's 150% generational boost with Ada/RDNA3 - as if Apple's waiting for M3 to produce something real - more on that below). But we don't know whether that came from (slight increase in parallelism, slight increase in clock speed) or (massive increase in parallelism, slight reduction in clock speed)!

My speculation: take this with more salt, but it's useful discussion about Apple's plans for the future. This year, Apple's releasing another generation of iPhone chips while the Macs are a year behind. Why would they make an Apple9 GPU architecture? The Macs would lag even farther behind in 2023, or just skip to Apple10 in 2023. Either they debut the incredibly energy-efficient hardware ray tracing they've been developing with PowerVR, sneak it into iPhone 14 and the AR/VR headset for early adopters, then bring the awesome tech to the Mac in 2023. Or they keep A16 on the Apple8 generation (strangely, they're using A15 on the low-end iPhone 14's this year), so as to not invest so much money into a chip with little install base (only iPhone 14 Pro for a long time). Their AR/VR headset has crappy ray tracing, or debuts some offshoot that's neither Apple8 nor Apple9 but has PowerVR ray tracing (how will early devs get acquainted with this technology? - through Metal's existing software RT and an easy transition?). Then the A17/M3 generation both go from the N5/N5P/N"4" node to N3 in the same year, with a new Apple9 GPU generation that has PowerVR ray tracing.

One more, final weird idea. They put PowerVR ray tracing into the M2 Pro chip, which the AR/VR headset seems like it will also use ("M1 Pro" from rumors, but using the previous generation in January 2023 would be silly). They perform the die shrink to N3 mid-generation and market it as "M2". They have every freedom to create the Apple9 GPU architecture now and ship it with premium iPhone 14 Pro (A16) and premium Macs (M2 Pro+). Meanwhile, the low-end (A15 and M2 regular) stay without this cool stuff. It's already getting hard to distinguish why you should buy iPhone 14 Pro over regular iPhone 14, except LiDAR (which is awesome!).

AppleFanatic859295 (talk) 01:37, 10 August 2022 (UTC)

One more thing (stealing an Apple slogan, pun intended) - staying at Apple8 with A16 means they can artificially boost performance by jumping clock speed from ridiculously low 600 MHz to like 900 MHz. They could also do this with M2 Pro over M2, gaining 80-100% generational boost without making an Apple9 GPU. This hypothesis relies on them having gotten to A15 by reducing clock speed. It's a motive but not proof that they lowered clock speed with A15.

AppleFanatic859295 (talk) 01:44, 10 August 2022 (UTC)

While I appreciate the effort on a personal level, since all this is unsourced, speculative, original-research material, I think we should just not include any such figures in this table. I get that it's a bummer to have the table just empty in places, but we can't just "make up" figures because they seem plausible. AND.. it's against Wikipedia's rules. So please remove. -- Henriok (talk) 19:10, 10 August 2022 (UTC)

That's an awesome idea! I put TBC like you asked someone (definitely not myself) to do a while ago (I was also not VanishedUser ;). That should stop random people from increasing the precision of A15 GPU FLOPS to 2 decimal places, or changing GPU stats at will. We have good objective estimates and cited benchmarks that agree for 1 decimal place of precision, but nothing more. The previous 1.23 and 1.54 also deviated from the limited precision of A14.

I just wanted my speculation out there, so that other people can understand what's going on with GPU stats and what might happen in future generations.

AppleFanatic859295 (talk) 04:10, 11 August 2022 (UTC)

Wait, the Apple M1 article explicitly says 16x8 ALUs per GPU core, while the M2 article explicitly says 32x8 ALUs. If we can check the sources that provided those statements, this speculation is no longer just a hypothesis. We could say for certain that Apple lowered clock speeds between M1 and M2, otherwise how do they have less than 2x the performance? Then, we can easily find a source (Metal Feature Set Tables) proving A15 and M2 are the same family.

AppleFanatic859295 (talk) 04:16, 11 August 2022 (UTC)

Sweep of the GPU stats

My edit history had some typos in the comments. I meant that A14 has **less** FP32 power per core. And I didn't mean to say "128 Int32" instructions twice; the second one was supposed to be "128 FP16", which is always the case across Apple's lineup. I did find a recently published source that proves these stats. The CPU stats came from actually reading the cited sources and finding that they didn't say what we had put on Wikipedia. Also, several YouTube videos show Powermetrics being run on M1/M2 Macs. They physically disprove claims like "M2 has only 3.46 GHz" when it actually runs at 3.504 GHz. — Preceding unsigned comment added by User243422224 (talkcontribs) 00:41, 18 January 2023 (UTC)

For god's sake, never cite CPU-monkey. They regurgitated an unsourced claim made by someone on Wikipedia, that Apple halved the clock speed from A14 to A15. This is that effect where Wikipedia becomes the source, which media cites, which Wikipedia then cites. Just make a de facto ban on CPU-monkey; it'll solve so many problems.

I did find a link for M2 GPU clock speed (1398 MHz), where the web page actually showed the image. Powermetrics also showed the CPU as 3504 MHz: https://www.gizchina.com/2022/06/29/the-m2-macbook-pro-cooling-system-could-not-cope-with-intensive-workloads/

Tables are unbelievably bloated.

The tables require three 4K screens in width to read, yet half of the columns are pointless and useless. The tables need to be optimized. Elk Salmon (talk) 20:08, 13 January 2023 (UTC)

I like the changes you made to the "M series" - "List of processors". No information is lost, but it's more dense. Intg (talk) 00:58, 14 January 2023 (UTC)
EUs are one pointless column! User243422224 (talk) 00:48, 18 January 2023 (UTC)

Products using U1

The U1 chip is used in the second generation HomePod 154.160.14.85 (talk) 12:39, 3 March 2023 (UTC)