IAmA CPU Architect and Designer at Intel, AMA.
Proof: Intel Blue Badge
Hello reddit,
I've been involved in many of Intel's flagship processors over the past few years and am working on the next generation. More specifically: Nehalem (45nm), Westmere (32nm), Haswell (22nm), and Broadwell (14nm).
In technical aspects, I've been involved in planning, architecture, logic design, circuit design, layout, pre- and post-silicon validation. I've also been involved in hiring and liaising with university research groups.
I'll try to answer any question in appropriate, non-confidential detail. Any question is fair.
And please note that any opinions are mine and mine alone.
Thanks!
Update 0: I haven't stopped responding to your questions since I started. Very illuminating! I'm trying to get to each and every one of you as your interest is very much appreciated. I'm taking a small break and will resume at 6PM PST.
Update 1: Taking another break. Will continue later.
Update 2: Still going at it.
jecb821 karma
They have fantastic people. I cannot underscore this enough: with the resources they have, the fact that they're able to compete in the same ballpark as we do shows their quality. Sadly for all of us, execution is key. We want to see an exciting marketplace as much as you do.
MagmaiKH244 karma
You mean something like selling all of your fabrication capacity is a bad idea?
Or do they have design issues now and are falling behind?
jecb410 karma
AMD had to sell their fabs, otherwise they wouldn't be in business today. There are advantages to having fabs. You'll see many things, especially with Broadwell, that you cannot do without owning a fab.
The design issues we see today are things that happened a year, maybe a year and a half, ago. But they were falling behind and it's hard to recover.
jzwinck136 karma
Can you elaborate on the benefits expected for consumers using Broadwell that would not be possible without Intel owning a fab?
AMD_GPU_Designer86 karma
Thanks for this :). We at AMD (especially on the GPU side) have an intense amount of respect for the engineers over at Intel. With what Intel has done with their recent CPU architectures, along with the constant advances in fabrication technology, they deserve a lot of credit for "keeping the ball moving forward" in our industry.
To support jecb's argument, you often hear of negative press going on between the two companies, but that kind of animosity is largely isolated to the legal, marketing, and upper management levels. The engineers at most companies tend to have many good friends working for competitors, and while we might throw in a friendly jab every now and then, it's almost a universally friendly community.
Thanks for this AMA. It's always cool to hear what it's like on the blue team :).
GrizzledAdams494 karma
Hi, I'm so glad you are on! I am an armchair enthusiast on the subject, as I grew up watching the duel between Intel and AMD (and NVIDIA and ATI/AMD). The work you do has been as inspiring to me as the space race, although it's not quite as glamorized in public. Thank you for the hard work, and VERY good job on all the processors you guys have been putting out. Thank your colleagues for me!
I do have some questions: [Please answer as many as you have time for. No need to do it all in one response!]
- How well do you think journalists (Anandtech in particular) cover your latest architecture? It seems to go over a majority of the tech-news writers' heads...
- When journalists attempt to pry into your CPUs to determine some undisclosed facts that you never released, how often do they hit the nail on the head? Is withholding this information from them just a cat and mouse game?
- Where do you read your tech news?
- Are you worried at all about reverse engineering efforts by competitors (i.e., is this common in your field)? Are protection mechanisms designed at the manufacturing/materials level or the logical/layout level?
- Was GlobalFoundries' split from AMD a wise move? Was the purchase of ATI wise?
- The Pentium 4 era was obviously a cluster-f. Interestingly enough, AMD has started to wander down this path in their latest processors -- hoping to get to higher frequencies with longer pipelines. There has to be SOME technical justification for AMD to 'repeat' the past's mistakes. What technical detail is elusively close yet fails to be reached time and again?
- How many transistors/gates are hand-laid vs computer synthesized on a modern processor? Old Pentium Pro era videos shown at colleges indicated a heavy reliance on manual design for large sections, although I assume this isn't the case anymore.
- Would an undergraduate EE have any possibility of an interview? Or is a Master's essentially required?
Thank you very much for your time! It would make my Christmas to hear a response.
jecb612 karma
I will thank them all for you. The space race is what got me interested in engineering so I know where you're coming from.
Here are your answers (some long because you asked some great questions):

1. Anandtech and Real World Tech (sometimes The Tech Report) are the best sites with the most accurate information. Especially with Real World Tech, we are sometimes surprised at the accuracy of many of the inferences. Anandtech's latest Haswell preview is also excellent; it's missing some key puzzle pieces to complete the picture, answer some open questions, or correct some details, but otherwise great.

2. They get close (see above). There are a couple of things to note here: sometimes the architectural information is not enough; the circuit implementation is incredibly important and that is not often discussed. I guess it's lower on the totem pole. Sometimes we do keep some information from the press that ends up in patents, conference papers, etc. But eventually we disclose everything. I think it's because we try to outdo ourselves every generation, as well as being proud and wanting to share our accomplishment. Ask Apple for a disclosure of Swift.

3. I like Real World Tech the most and find that Anandtech and The Tech Report do good jobs too. I also read Semiaccurate for its humor value and to level set.

4. No, and there are few protection mechanisms once it's in the customer's hands. By the time they're able to reverse engineer it, we're on to the next thing. And even then their implementations tend to not be as good (see AMD power gate efficiency and leakage). Here I'm referring only to hardware/circuits because security features are a different matter.

5. They didn't have a choice if they wanted to stay in business. They do not have enough silicon revenue to sustain it. In retrospect the ATI purchase was necessary; the sad part is they did overpay by a large margin. Also, execution missteps in coming out with their "APUs" allowed us to come very close.

6. In my mind, NetBurst, much as it's maligned, brought some very good things internally for Intel design teams. First, unbelievable circuit expertise (the FP logic was running at 8GHz in Prescott, stock!). Next, the trace cache, which you can see reimplemented in Sandy and Ivy Bridge. Also, SMT. Building a validation team that could validate the beast pre- and post-silicon. The power-perf thinking, i.e. frequency through power savings. Finally, the development of tools and project management required to do that kind of extreme design. All of these learnings continue to this day and are a very large contributor to why, in client and server CPUs, Intel can sustain the roadmap we have.

7. I can't say. But the most important, performance- and power-sensitive parts are still hand-drawn. Otherwise you can't get past around 1.8GHz on Intel 22nm without losing too much perf from overhead.

8. Yes, we have tons of UG interns and most of our hires have a BS. An MS is always helpful, but do it for your own personal growth and interest, not to get a job. If you're interested, PM me.
TexasTango376 karma
I thought I knew enough about PCs to get by. I don't know what the heck you guys are talking about.
TexasTango64 karma
Thanks :) So how would I get into a job like yours? And I gotta ask: AMD vs Intel?
jecb162 karma
Spend many hours thinking about all sorts of circuits and architectures and be prepared for a very technical interview.
AMD vs. Intel: I know how Intel processors are made and validated; I would trust my life to them.
SaawerKraut375 karma
What is your educational and work experience background? I'm an EE undergrad and working for a place like Intel sounds extremely interesting. What kind of knowledge would I need for a job like yours?
jecb496 karma
I have a BS and MS and have mostly worked in circuit design. Interest, for the most part; willingness to learn helps a lot. We have a ton of internships every summer and you can start there as an undergrad. PM me if you want to send me your resume.
Personally, my EE coursework was very circuits-heavy. Particularly VLSI, but analog is essential to ace interviews. Comp. Arch. and device physics concentrations also help. And please, be sure you can code (any C-like language at least) and understand statistics. Skills beyond the technical are necessary to get the more interesting work as well, so be sure to also develop those.
urda138 karma
I take it you do a lot of work in Verilog? I got a taste of computer architecture and chip design in my Master's program this fall as a Computer Engineer, granted it was all VHDL and just FPGAs.
But learning all about the packaging methods, development, and life-cycle planning was a blast. It was a course that was a big eye-opener for me as a Software Engineer, so thanks for all your hard work at these low levels so guys like me can keep our abstractions :P
jecb167 karma
Glad to have helped. Verilog and FPGAs is how I started. So you're getting there.
theidleprophet342 karma
I read that social coffee breaks cause an increase in productivity in certain fields because people who are struggling with problems can discuss them.
What kind of breaks do you have?
Do you work in teams when designing?
What level of autonomy do you have when someone comes up with a promising idea?
Does Intel work hard to keep its employees happy?
What specific problem (among those you can disclose) took you the longest to solve and how did you finally solve it?
jecb458 karma
- Most days I work my own hours. I like to work nights and have the flexibility of working remotely. That said, my colleagues and I do work many hours; weekends, holidays, and being on call 24 hours a day especially when we get silicon back from the fab is not abnormal.
- Yes, traditionally we divide the design into functional blocks and small teams are responsible for that part. The more fun parts are what we call horizontal domains, where one small team is responsible for something chip-wide.
- Yes, but the bar is very high to get anything into silicon. But anyone can propose a feature.
- Yes, believe it or not. This week we had free movie tickets, the week before NBA tix, and the week before passes to Nike and Adidas employee stores (50% off!). Also there is the free fruit, coffee, soda, and popcorn.
- This is an oversimplification, but the problems we solve when distilled into their most basic form are simple. One challenge is that our designs now have over a billion transistors. In general, post-silicon debug is a HUGE challenge because of limited visibility.
amogrr68 karma
Assuming you're in OR, all the free movie tickets are only in Cornelius. A pain to get to :(
jecb145 karma
No longer true. And I do take coffee breaks all the time (except I'm not a coffee fan, so it ends up being water and soda).
just2quixotic20 karma
One challenge is that our designs now have over a billion transistors. In general, post-silicon debug is a HUGE challenge because of limited visibility.
I do believe I just read a profound understatement.
asperous144 karma
As a response to 4,
I worked at Intel as an intern and I quit within a few weeks. Obviously everyone's experience is different and every person is different, but Intel is HUGE and it's easy to get lost physically, socially, etc. The work I was doing didn't really matter (it was automated testing). They had outsourced most of my lab so it felt really empty.
I would walk through the cubicles during the afternoon and people were napping, playing games, etc. It just felt like everything that I always dreaded as a "job" growing up.
Contrast this with another internship I had (computer science in both), in a small business (30-40 employees) in Portland, where everyone knew each other, the workspace was open (agile programming), and everything was quickly moving and high-energy.
Two completely different worlds.
Reddidactyl65 karma
My computer is being dumb and can't copy and paste, but check out YouTube for when Conan went to Intel. The cubicle environment looked horrible.
Edit. Direct Link
http://www.youtube.com/watch?feature=player_detailpage&v=MaaBPRnGJSo#t=51s
jecb164 karma
Some people don't like it; I like that my cubicle is bigger than my boss's and my boss's boss's.
annYongASAURUS54 karma
Your boss's boss has a cubicle? I can see why that sort of environment isn't for everybody.
ProfLacoste108 karma
As an Architect (not a "qualifier architect" (i.e. "software architect"), but an actual Architect), I can tell you that there is an upper limit on how many corners a building can have, and thus, how many bosses can have corner offices.
jecb438 karma
Yes, next to the door and every stall. The way the holders are designed, you actually need to close the laptop to get it to fit in the holder. Sorry, it's not as gross as you picture it.
Tummmymunster49 karma
Is it possible for you to post a picture of these holders? Seems nifty.
colaxs237 karma
Thanks for the AMA. I built my PC with the i5 2400 and it's been brilliant for gaming.
My question is, will Intel continue to lock OC capabilities in non-K CPUs in the near future? Are there any plans to unlock it for models once they get outdated (e.g., my CPU in an H67 motherboard)?
Second, from a design point of view, would combining the chipset and CPU (which raised the hackles of everyone online) offer substantial benefits performance-wise?
Third, what are the fabs like? I imagine them to be something out of 2001: A Space Odyssey.
Thanks for the brilliant job you guys have been doing since the Conroe days :D
jecb329 karma
Glad to hear about the gaming rig.
SKUing is a sensitive topic. Once the fuses are blown, it's not reversible. So no on the unlocking portion for the time being. As architects, we have a team dedicated to putting overclocking features into the designs, and tests in place to cherry-pick those parts to box and sell as such. So you are getting parts on the good side of the normal distribution when you buy K CPUs.
Integrating the chipset and CPU has absolutely no performance benefit. It actually makes things harder, because there are a lot of IOs in the chipset that you would need to move into the leading process that do not benefit in performance, i.e. SATA3 and USB3 run at fixed bandwidth.
Fabs are unbelievable. Sadly these days there's too much automation for humans to do much except get hurt. There is a lot of nasty stuff in there.
Thanks for the compliments! We all appreciate it.
colaxs156 karma
Thanks so much for replying. My cousin has been working at Intel for more than 10 years now. I used to taunt her back when I had an AMD Athlon 64 3000+ PC. She laughed and mocked me when the E6600 was released :)
Complete n00b question here. How do you safeguard your design secrets? What prevents AMD (theoretically) from buying an Intel CPU, opening it up, putting it under electron microscopes, and reverse engineering the tech?
Cheers
jecb270 karma
No prob. Cycles of ridicule are fun.
Again, this is not the official Intel stance on protecting confidential data. What we do is so complicated these days, and so based on the collective experience of the entire design team, that I could give Apple/AMD/Qualcomm/Nvidia the multi-million-line RTL model for Haswell and they would not come up with as good a design as we have, given similar time constraints. We won't, but it should give you an idea of the size and complexity of the problem.
And nothing prevents Intel from opening up competitors' products and doing the same either. But architecture and design teams stay away from that for infringement reasons.
On another note, most of the information at Intel is out in the open, yet very little gets out. It kind of gives you faith in the honesty (and smarts) of the large majority of Intel employees. It's laughable how easily traceable the information that does get out is. For example, it's clear that the people who leak to Semiaccurate work very far removed from any real engineering work. We have a laugh when they get code names wrong or announce features that don't exist.
johnparkhill235 karma
Awesome IAmA. I'm a scientist at Harvard. I write high-performance code for your CPUs using the ICC suite.
I'm hoping that this whole GPU thing will blow over and the Phi will deliver similar FLOPS/dollar in shared-memory teraflop desktops without the tedious coding.
At this point do you think I can skip fiddling with GPUs if I haven't already? If the Phi retains the full x86 instruction set on each core, I'm certain it can't match the power consumption of a GPU (is that true?)... Even so, I don't really care... I just want my 200x speedup on DGEMM without having to do much more than usual C++ with some compiler flags. Is that going to be the way, or should I bother learning CUDA?
jecb232 karma
Hello. That's the idea. I coded some seismic diffraction code in CUDA and it was a horrible experience! And getting the data into the GPU and out ended up nullifying all of my speedup! So I'm really happy with the Phi (Knights family) stuff my friends are working on. My hope is that I mock them enough to shame them into delivering a great solution where the coding part is simply compiler flags in ICC.
If you want to fiddle with the GPU, you'll learn about GPU architecture. That is never a bad thing, but I do work in hardware. Why do you assume x86 decoding is not power-efficient in this day and age? I guarantee you won't notice it if you move large data sets around.
My suggestion: If you're a scientist, try to get yourself a Xeon Phi SDV to try out.
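For readers wondering what the "usual C++ with some compiler flags" path looks like in practice, here is a minimal, hedged sketch: a threaded DGEMM call through a BLAS library (MKL's CBLAS interface here), no CUDA involved. The build line and flag names are assumptions that vary by compiler version, not something quoted from the AMA.

```cpp
// Hedged sketch of the "plain C++ plus compiler flags" path: a threaded
// DGEMM through a BLAS library (MKL's CBLAS interface here), no CUDA.
// Illustrative build line (flag names vary by compiler version, treat
// as assumptions): icpc -O3 -xHost -qopenmp -mkl dgemm_sketch.cpp
#include <mkl.h>       // cblas_dgemm; any CBLAS-providing library works
#include <vector>
#include <cstdio>

int main() {
    const int n = 2048;
    std::vector<double> A(n * n, 1.0), B(n * n, 1.0), C(n * n, 0.0);

    // C = 1.0 * A * B + 0.0 * C, row-major square matrices.
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n,
                1.0, A.data(), n,
                     B.data(), n,
                0.0, C.data(), n);

    std::printf("C[0] = %f\n", C[0]);   // expect n = 2048.0
    return 0;
}
```

The threading and vectorization come from the BLAS library and the compiler, which is the appeal over hand-written CUDA kernels and host/device transfers.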
jecb684 karma
Making our ideas reality. You can have the greatest idea in the world, but the physical world does not like to cooperate.
theidleprophet399 karma
You can have the greatest idea in the world, but the physical world does not like to cooperate.
I'm going to go ahead and steal this line.
edwin_on_reddit197 karma
Are there any new breakthroughs in developing more reliable lead-free solders? Thanks a lot for doing this. I understand if my question isn't in your domain.
jecb237 karma
Packaging technology is something that will be changing dramatically with our 2013 family to make it more environmentally friendly and support new stuff we are putting in the silicon. Whether Pb-free solder is part of that, I honestly don't know.
Great question though, I'll ask one of my colleagues and try to get back to you.
raptorlightning160 karma
Quick question that may elicit a longer response, and I apologize for the potential bluntness:
When do you expect the x86 (x64) architecture to die? We are all aware of the quickly approaching constraints silicon poses to progress, especially in the processing world, and I wonder how far you, or the company as a whole, see it being possible to keep pushing an old (but constantly patched) architecture. What innovations would have to come about to radically push the computing world ahead by leaps and bounds towards something that is truly, fundamentally, new technology?
jecb261 karma
x86/x64 will not die. Contrary to popular opinion, this is not bad. I've worked on the instruction decoder and there is a lot of FUD out there. Every time we look at adopting another ISA, it makes no technical sense. Do you mean evolving the P6 microarchitecture?
I don't work on process technology so the best thing I can tell you on that is that every time I come out of a process disclosure I'm aghast. The technology development folks are really, really good at what they do. I feel good.
To radically push computing, we need to change the software paradigms to start. We're putting TSX into Haswell, and we'll see how that does. Alternatively, a technological breakthrough such as quantum computing. For now, energy-efficient IPC is king.
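For context on the TSX mention above, here is a minimal sketch (not Intel sample code) of the lock-elision pattern TSX's RTM instructions enable, using the `_xbegin`/`_xend`/`_xabort` intrinsics from `immintrin.h`. It assumes a TSX-capable CPU and an appropriate compiler flag.

```cpp
// Minimal sketch (not Intel sample code) of lock elision with RTM, the
// instruction-level interface of TSX. Assumes a TSX-capable CPU;
// build with something like: g++ -O2 -mrtm tsx_sketch.cpp
#include <immintrin.h>
#include <atomic>

static std::atomic<bool> lock_taken{false};
static long counter = 0;

static void locked_increment() {          // fallback path: a simple spinlock
    while (lock_taken.exchange(true, std::memory_order_acquire)) { }
    ++counter;
    lock_taken.store(false, std::memory_order_release);
}

void increment() {
    unsigned status = _xbegin();          // start a hardware transaction
    if (status == _XBEGIN_STARTED) {
        if (lock_taken.load(std::memory_order_relaxed))
            _xabort(0xff);                // lock is held: abort and fall back
        ++counter;                        // speculative update
        _xend();                          // commit atomically
    } else {
        locked_increment();               // aborted or unsupported: take the lock
    }
}

int main() { increment(); return counter == 1 ? 0 : 1; }
```

Reading the fallback lock inside the transaction puts it in the read set, so a conflicting lock-based update aborts the transaction instead of losing an increment.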
minizanz134 karma
I have 3 questions:
1) Why did the solder between the IHS and the die get replaced by low-grade TIM on IB? It seems to offer nothing positive for the consumer, doesn't save Intel much if any money, and the TIM completely dries out within a few hours when exposed to open air, so it will not last the life of the die and should have a similar 2-3 year lifespan to most heat sinks (even if it is 2x that, it would not be long enough).
2) Why do the K edition chips not support VT-d? All of the non-K chips support it, but the K chips do not. As someone who likes to overclock and test server OS builds, this seems like it may become a problem eventually.
3) Why are there no full i7 chips on socket 2011? All of the chips are broken Xeons; they do not have all cores enabled, and they do not have all of the cache enabled. And a side question on this topic: why are all of the Xeons locked? The top SKU or two used to be unlocked.
jecb148 karma
1) I do not know the details, but please know that nothing at Intel gets done today if there are no advantages (in cost or reliability or something else I don't know of). Personally, on parts that need overclocking I take it off and replace it with something else; for parts that don't, I don't care.
2) I had not noticed this. I don't work on the SKUing, but I agree with your reasoning. I'll make a case for it; thanks for bringing it to my attention.
3) There will be one Haswell variant that is going to attempt to fix this break-up of the high-end desktop from the mainstream. But to answer your question as honestly as I can, I don't think it's the case about the cores or the cache. The default SKU for Jaketown is 6 cores and 15MB cache. Let me know otherwise.
jecb174 karma
We're making a really big effort to make it happen for you. What is sexy to you? I'll make sure to put some extra goodness there.
jecb125 karma
It has to be design rule clean. Otherwise the fab won't manufacture it. It's extremely hard to do.
qazplu3363 karma
It has to be design rule clean.
What exactly does that mean? That it must serve some ostensible purpose otherwise it's just wasting space and transistors and such?
jecb142 karma
No, it means that it has to be manufacturable in the lithography. There are certain rules the fab gives the design team to avoid problems like opens, shorts, thinning, contamination, reliability issues, density issues, and others. We follow these rules to get as close as possible to 100% yield, meaning the fraction of all the chips manufactured that have no defects.
These rules are extremely complicated, so doing anything other than generally straight wires on a pre-defined grid is asking for trouble.
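As a hedged illustration of the yield figure mentioned above (this is the textbook Poisson model, not Intel's actual model), yield can be estimated as Y = exp(-A * D0) for die area A and defect density D0; the numbers below are hypothetical.

```cpp
// Hedged illustration only (textbook Poisson model, not Intel's): a rough
// yield estimate Y = exp(-A * D0) for die area A and defect density D0.
// Both numbers below are hypothetical.
#include <cmath>
#include <cstdio>

int main() {
    const double die_area_cm2    = 1.6;   // hypothetical ~160 mm^2 die
    const double defects_per_cm2 = 0.2;   // hypothetical defect density
    const double yield = std::exp(-die_area_cm2 * defects_per_cm2);
    std::printf("estimated yield: %.1f%%\n", yield * 100.0);  // ~72.6%
    return 0;
}
```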
jecb221 karma
Actually, it has all of the colors in the visible spectrum at once when you look at the die.
larghetto104 karma
How often do you get feedback from software developers concerning possible improvements in the architecture?
jecb176 karma
Often. We get benchmark traces even more often. Google and Microsoft are some of the most prolific. Google on power-perf and Microsoft on compatibility issues.
jecb120 karma
I'm yearning for the day when I get to use them in a CPU. But not in my near future.
edwin_on_reddit91 karma
For a class project we dissected the PCB of a 2010 model phone (HTC Incredible). To do that, we cross-sectioned and X-rayed many of the chips to inspect them. One of the mysteries we encountered was explaining the layout of the BGA of this chip. The footprint of the package takes up roughly the size of the frame of that picture. The BGA's partially populated square center pattern was typical of many other chips this size. However, the parenthesis-shaped "arms" were a very strange shape, and we had no idea why. Would you care to hazard a guess? This is a Hynix 8 Gb memory chip. I will edit this comment to include a spec sheet shortly.
jecb107 karma
I'm a sucker for X-ray micrographs! They might be there to bond the edges of the PoP, or used as TSVs. Alternatively, they might have no function at all.
edwin_on_reddit42 karma
I can't pull up the datasheet, but the part number is h26m44001car. There were no TSVs in this chip (that we knew of). 1st level interconnects were Au wirebonds that went from the stacked silicon to the chip carrier. We also thought that they might be to anchor the chip down, but then again, why would you want to do that? Putting BGA balls farther away from the neutral axis will increase thermal stress on them. Wouldn't the underfill alone suffice to anchor the chip at the edges?
jecb72 karma
All good theories, I really don't know. Let me see if anyone at the office knows.
Famzilla88 karma
Just a few questions:
What are the PCs inside Intel that you guys use on a day-to-day basis like? Are they using hardware that has not been released to the public?
Are there working prototypes of future processors that aren't even supposed to come out for a couple more years (such as Broadwell)?
As a person who just bought an Ivy Bridge-based system, is there anything you can tell me to convince myself to save up for a Haswell or Broadwell system?
jecb169 karma
Yes and no. For day-to-day, our laptops have regular Ivy Bridge processors. Most of the heavy lifting though happens in our datacenters. There we do have cherry picked, specially-fused parts to run our high compute workloads. The most interesting scenario I can recount is we used our Haswell A0 silicon to tape out the subsequent steppings, thus "validating" it.
Yes. Many in the labs, used in publications. For products, we have things in the labs well in advance because it takes time to get them up to snuff.
What do you usually do with your system? If you like to overclock, Haswell is worth it (can't tell you why, but read the Haswell Anandtech preview very carefully for buried treasure). On-die graphics is improving quite a bit as well. If you're into energy efficiency or even more graphics, Broadwell. I think the tech community will be very pleasantly surprised by Broadwell. But I'm biased, so we're just going to have to prove it the hard way.
domestic_dog84 karma
How do you think the near future (2015-2020) is going to turn out, considering the massive headaches involved in feature shrink below nine or so nanometers? What is the most likely direction of development in that timeframe - 3D nanostructures? A new substrate?
Given that a ten nm process will easily fit five if not ten billion transistors onto a consumer-sized (< 200 mm2) die, how are those transistors going to be used? Multicore has worked reasonably well as a stopgap after the disastrous NetBurst architecture proved that humans weren't smart enough to build massive single-core. Do you foresee more than six or eight cores in consumer chips? Will it just go into stupid amounts of cache? Will it be all-SoC? Will the improvements be realized in same-speed, low-power chips on tiny dies?
Intel has historically been the very best when it comes to CPUs and the worst when it comes to GPUs. As a casual industry observer (albeit with a master's in computer architecture), it seems improbable that many competitors - if any - will be able to keep up if Intel can deliver the current roadmap for CPUs. So how about those GPUs? Larrabee was a disaster, the current Atom PowerVR is an ongoing train wreck. Can we expect more of the same?
kloetersound31 karma
Seconding your first question; it seems like Intel already has issues getting 14nm to work (delays, rumors about moving to fully depleted SOI). See the comments in http://www.eetimes.com/electronics-news/4400932/Qualcomm-overtakes-Intel-as-most-valued-chip-company
jecb91 karma
I don't know exactly, except for 450mm wafers at some point, which is too far ahead for me. 14nm is hard, I won't deny it. But it's hard for everyone.
Certainly they're not all going to be switching at the same time. For consumer chips, I do not see a core explosion or cache explosion. All-SoC is an option, but it doesn't make too much business sense for some projects to all be in one die. And I have ideas about where the improvements will come from, but sadly for you, someone else pays me for exclusive rights to those ideas.
Let me start by addressing PowerVR. What Atom has is a higher-clocked version of what the A*'s had. The problem we're fixing is that it's a couple of generations behind. That'll change. Larrabee is not a graphics part any longer, but it is a GP compute engine and that seems surprisingly good. Finally, we're working hard to keep our IPC lead while staying power efficient and we're devoting the area resources to the GPU. The drivers are improving and the current perf/mm2 of Gen is good enough to challenge any graphics architecture.
myjerkoffaccount81 karma
I'm currently in my third year undergrad computer engineering. I feel like I don't know enough to even begin to do work like you're doing. How do you learn enough to be able to design such advanced components?
jecb174 karma
What you do in school is orders of magnitude less complicated than what you see sold in stores. The key is a great team and willingness to spend many months learning. Sorry, this cannot be sugar-coated.
flatfeet80 karma
Is it true that the Sunnyvale Fry's makes you take off your Intel badges so you don't get in fights with AMD employees?
I heard that somewhere a long time ago and always wondered if it was legit.
jecb185 karma
I don't know, but I usually don't wear my badge outside of work. I'm dorky, but there are limits.
jecb152 karma
Great question. The group I specifically work with is very diverse, about a 1:1 male-to-female ratio. And we have people from all parts of the US, and many countries. There are the stereotypical ones you would think of, like India and China. But we also have multiple people each from Mexico, France, Russia, Romania, Nigeria, Spain, Norway, Bangladesh, Malaysia, England, Egypt, Costa Rica, Canada, Israel, Laos, and many more.
theidleprophet67 karma
What's the best way to upgrade my motherboard to ensure the longest life for its socket? That is to say, do some CPU socket types have longer lifetimes than others?
Unrelated, but what education programs did you go through to get the skills required for what you do?
Thanks for taking the time to do this AMA. I'm excited to hear more about what you do.
jecb100 karma
LGA1150 is the next socket. I hope it's good until at least 2015, but we need to respond to changing markets and so...
I hold a BS and MS in Electrical Engineering from what some might call a "prestigious" university. But across the team it varies from technician degrees to PhD.
Ask any more questions you like.
xpansive59 karma
Where do you think the biggest performance gains will come from in the future? Will it be higher clocks, more cores, even more complex instruction sets, etc.?
jecb75 karma
This is a very personal take on the state of things, so don't put too much stock into it. As Intel now competes with the ARM ecosystem, the release cadence is shortening and therefore the jumps will be less significant. If you're power constrained, higher clocks will not be too smart a move, and it's very expensive effort-wise to get higher clocks. More cores, other than in graphics, is unlikely to go beyond 4 for user applications; for servers, it'll keep increasing. More complex instruction sets? Sure, ARMv8 is more complex than ARMv7.
MagmaiKH54 karma
Does Intel have any plans to make graphics chips this millennium?
(No, those don't count.)
jecb93 karma
On-die, are you willing to pay for the die area? I suggest you look at the perf/mm2 and perf/W of our Gen graphics. We're working very hard to improve the Windows and Linux drivers to complement the hardware. If you're expecting discrete graphics, then you'll be disappointed.
airencracken45 karma
Do you see much of a future in Itanium?
Do you find other architectures besides x86/x86_64 interesting for any particular reasons? (MIPS, ARM, POWER, SPARC, etc)
Now that BIOS is going away, how much more life do you think is in ATX? BTX didn't go over so well.
How do you feel about UEFI in general? There are parts I like and parts I dislike; I really wish Intel had backed coreboot.
How do you see Intel's role in the larger Free Software community?
edit: What do you think of VIA (someone already asked about AMD)?
I've been an AMD guy for several years, but my latest laptop has a Sandy Bridge i5 in it and I'm quite pleased with it. I hadn't had an Intel processor since my Pentium II.
jecb48 karma
If Xeon gets all of the RAS features that Itanium has, it's my personal opinion that no new development of hardware will happen. But there will still be Itanium machines in the wild for a long, long time.
Yes. For example, because of OpenSPARC many academic papers use it. For ARM the answer should be obvious, but today we care about things that most people don't, such as memory ordering modes (see the sketch after this reply). The Xbox 360 has PowerPC, and some of the most powerful and highest-frequency designs use the POWER family, so you've gotta keep an eye on it.
ATX/BTX are standards, so as long as OEMs manufacture they'll still be around.
From my point of view, looking from below, it makes no difference whether we have UEFI or coreboot. Our processors work with both. This is more of a question for firmware folks...
To be perfectly frank, I'm very proud that Intel contributes heavily to the Linux kernel and other Free Software. Sometimes, some parts of the company not affiliated with the Open Source Technology Center run counter to that ideal. Talking candidly to the people involved, it really comes down to not being in violation of other corporate agreements. Make of that what you will.
We do not hear much about VIA; the last thing I heard about was Isaiah.
I'm glad you like the Sandy Bridge.
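As referenced above, here is a minimal sketch of what "memory ordering modes" means in practice: a publish/consume pattern written with C++11 atomics. With the release/acquire pair shown it is correct on any architecture; relax both orderings and it can fail on weakly ordered hardware such as ARM, even though x86's stronger model (plus some luck with the compiler) often hides the bug.

```cpp
// Minimal sketch of a publish/consume pattern with C++11 atomics.
// The release/acquire pair makes it correct on any architecture; with
// both orderings relaxed, the assert could fire on weakly ordered
// hardware (e.g. ARM) even though x86's stronger model often hides it.
// Build with something like: g++ -O2 -pthread ordering_sketch.cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int>  data{0};
std::atomic<bool> ready{false};

void producer() {
    data.store(42, std::memory_order_relaxed);
    ready.store(true, std::memory_order_release);        // publish
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) { }   // wait for publish
    assert(data.load(std::memory_order_relaxed) == 42);  // guaranteed here
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join(); t2.join();
    return 0;
}
```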
theidleprophet38 karma
Sorry for so many questions. This will be my last for tonight, since I'm falling asleep.
If there was any realistic thing you could change about your job, what would you change?
freebasen36 karma
I've been told that Intel CPUs are still completely hand laid out. Is this true? Do you see Intel transitioning blocks to place & route for CPUs in the near future? Does Intel use a custom toolset or vendor-supplied ones like Synopsys, Cadence, etc.?
jecb106 karma
All of the analog circuitry, arrays, and performance-sensitive parts are definitely hand-drawn (schematics) and hand laid out. We're one of the few places that actually still do this (apparently Apple does too). You can tell which parts were laid out by hand if you look at die photos.
As for tools, we definitely use many custom tools. For some things we do use Synopsys and Cadence tools, when it doesn't make sense to develop internally. But, more interestingly I think, we develop on Linux systems and use all sorts of Open Source software.
pheonixblade959 karma
Huh, I used Cadence to design this. What level of education does a VLSI layout engineer usually need?
jecb83 karma
You did your own library layout! That is better than most people we interview. Make sure you know your basics extremely well, otherwise you'll crash and burn.
eldeveloper32 karma
Is being from a "prestigious" university important to get a job at one of the big companies, like Intel?
jecb61 karma
Not in my group. Nor is your degree. But we do have data on which graduates tend to do better and so we try to actively target those schools.
jecb98 karma
Where I work we don't speak marketing. We use code names. But to answer you with something, the Haswell microarchitecture will come out through 2013. We have many variants of it and I don't yet know what moniker they'll be branded with.
jecb33 karma
Wearable computing is an obvious one. Natural interaction with computing devices. Computing everywhere.
Depends on the complexity of the feature. We call this "late binding" and it has to pass a series of acid tests.
Gan3b26 karma
What's your stance on the new rumour doing the rounds that low/mid-tier Haswell CPUs will be soldered to the motherboard, making them non-changeable?
If it turns out to be true, won't the same happen to high-end CPUs to save on costs?
jecb46 karma
This rumor is likely misinterpreting facts or based on really incomplete information. Many of the variants will be BGA packages for certain form factors, but not all. In my mind, if we lose customers by offering less choice, we did not save anything. But I'm not in sales or marketing.
MagmaiKH22 karma
How does designing the next generation Intel chip compare to designing an IC with verilog/VHDL tools? How different and further evolved is it?
jecb37 karma
We use SystemVerilog for design, e and SVTB for validation, C and C++ for our architectural simulator, and our own languages for the microcode in the ROMs and RAMs of the processor.
The basics are simple, but a billion transistor processor requires many lines of HDL and just as many lines of test code.
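As a hedged, toy-scale illustration of the "C and C++ for our architectural simulator" point (this is a generic sketch, not Intel's simulator), here is what the core of such a simulator looks like in spirit: fetch, decode, and execute a tiny made-up accumulator ISA, one instruction per simulated cycle.

```cpp
// Toy-scale sketch (not Intel's simulator) of a C++ architectural
// simulator loop: fetch, decode, and execute a made-up accumulator ISA,
// one instruction per simulated cycle.
#include <cstdint>
#include <cstdio>
#include <vector>

enum Op : uint8_t { LOAD_IMM, ADD, HALT };
struct Insn { Op op; int64_t imm; };

int main() {
    const std::vector<Insn> program = {
        {LOAD_IMM, 5}, {ADD, 7}, {ADD, 30}, {HALT, 0}
    };
    int64_t  acc = 0;
    uint64_t pc = 0, cycles = 0;

    for (bool running = true; running; ++cycles) {
        const Insn& i = program[pc++];        // fetch
        switch (i.op) {                       // decode + execute
            case LOAD_IMM: acc = i.imm;      break;
            case ADD:      acc += i.imm;     break;
            case HALT:     running = false;  break;
        }
    }
    std::printf("acc=%lld after %llu cycles\n",
                (long long)acc, (unsigned long long)cycles);
    return 0;
}
```

A production simulator adds pipelines, caches, timing models, and checkers, which is where the "many lines of test code" come from.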
Like_20_Ninjas19 karma
I have been hearing carbon nanotubes touted as the next big step in computer hardware. Can you give me your professional opinion on this and what it means? I am an enthusiast and am not very knowledgeable otherwise.
Thanks so much for your AMA!
jecb28 karma
Too hard to manufacture reliably today, so it's hard to build circuits with them. The best workarounds I have seen come from the Robust Systems Group at Stanford, but they're still a ways from being usable in high-volume manufacturing.
GoGhost18 karma
I have a couple questions.
How does Intel get ideas for what features to implement in its processors, and how does it prioritize the implementation and research of those ideas?
I only ask because I founded a software company that heavily relies on CPU speed for our software to run correctly. We are working on a new animation technology related to 3D rendering. It would be great if we could somehow have an impact on future processor releases :)
jecb27 karma
For features, the project members propose ideas, marketing proposes ideas, and our big customers propose ideas. We develop them for a while, and then whittle them down to the subset that fits our die area and schedule targets. Sorry for the oversimplification, but that is it in a nutshell.
Regarding 3D rendering technology, for Haswell we gathered feedback from Pixar, Dreamworks, and ILM (that I know of).
Somthinginconspicou16 karma
Two questions thanks.
1. So, to be that guy, what's your opinion on AMD?
2. I remember reading about an 80-core CPU you guys were testing a few years ago. Any likely time-frame on when that sort of technology goes from the testing stage to actually being buyable by your average consumer?
Thank you very much.
jecb29 karma
See above.
Most likely never. These are Intel Labs projects where product groups pick and choose aspects to build into other products.
MLBfreek3515 karma
How good is MIPS for teaching computer architecture? I took a class where we learned the implementation of a simple MIPS machine and I was wondering if I actually learned anything useful.
jecb23 karma
That is the first processor I programmed in my first computer architecture course as well. It is useful, though modern designs have evolved way beyond it.
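As a small illustration of the kind of exercise such a course builds on (a generic sketch, not taken from the AMA), here is a decoder for the MIPS R-type instruction format in C++; the field boundaries follow the standard MIPS32 encoding.

```cpp
// Generic sketch of a course-style exercise: decoding a 32-bit MIPS
// R-type instruction into its fields (standard MIPS32 encoding).
#include <cstdint>
#include <cstdio>

struct RType { uint8_t opcode, rs, rt, rd, shamt, funct; };

RType decode_rtype(uint32_t insn) {
    return {
        static_cast<uint8_t>((insn >> 26) & 0x3F),  // opcode [31:26]
        static_cast<uint8_t>((insn >> 21) & 0x1F),  // rs     [25:21]
        static_cast<uint8_t>((insn >> 16) & 0x1F),  // rt     [20:16]
        static_cast<uint8_t>((insn >> 11) & 0x1F),  // rd     [15:11]
        static_cast<uint8_t>((insn >>  6) & 0x1F),  // shamt  [10:6]
        static_cast<uint8_t>( insn        & 0x3F)   // funct  [5:0]
    };
}

int main() {
    // add $t2, $t0, $t1  ->  0x01095020 (opcode 0, funct 0x20)
    const RType r = decode_rtype(0x01095020);
    std::printf("rs=%u rt=%u rd=%u funct=0x%x\n", r.rs, r.rt, r.rd, r.funct);
    return 0;
}
```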
wellonchompy13 karma
Thanks for the AMA, I spend all of my work hours working out how to wring the highest performance out of your work.
I'm a Linux engineer involved in very low-latency systems, where fast single-threaded performance and massive core counts are critical to what I do. We've just moved our platform from AMD to Intel Sandy Bridge-based Xeons after the disaster of Bulldozer, and have been very pleasantly surprised with the performance of the Sandy Bridge Xeons. The E5-2690 is one amazing chip, with 8 cores at 2.9 GHz that happily burst to 3.8 GHz for the fastest single-threaded performance I've ever measured in a general purpose CPU (although we've had FPGAs go faster).
Using AMD systems, we used to be able to comfortably run 48 discrete cores in a single system (4x 12-core chips), which was fantastic for the tasks we run, where the latency of IPC between NUMA cores is still orders of magnitude lower than for network IPC. However, Intel still doesn't have anything on the market that approaches this core density at the cost or speed of the 2-year-old AMD chips, so I have a couple of questions:
- What's the reason that Xeon chips have a low core count compared to AMD? 8 cores per socket feels a bit restrictive when the ARM SoC in my phone already has 4.
- I know that SMP is tricky, and NUMA must be hard to do well (no thanks to operating system schedulers being obtuse about it), but is there a technological reason that we don't see the fastest cores available in 4-socket (or more) setups? Like I said earlier, I love the E5-2690, but the 4-socket versions only go up to E5-4650 at 2.7 GHz, with only 3.3 GHz turbo.
- I guess this is probably more to do with marketing and SKUs, but why do the 4-socket versions of chips cost twice as much as the 2-socket versions? Related to the previous question, are they physically different, or are they artificially locked to 2-socket setups for marketing reasons? With AMD, we'd get exactly the same Opteron chip whether it was for a 1, 2 or 4-socket setup.
jecb9 karma
Thanks for the praise and the informative intro.
We target a certain amount of performance at the lowest cost which means saving die area and using SMT.
It takes a lot more validation because those usually go into mission-critical systems. So the refresh cycle is slower.
To a first approximation, because of the IOs: the chip itself is bigger and also takes more time to validate.
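Related to the NUMA and IPC latency discussion above, here is a hedged, Linux-specific sketch of pinning a thread to a chosen core with `pthread_setaffinity_np` so cross-socket traffic stays predictable; the core number and which socket it lives on are assumptions about a particular machine, not something from the AMA.

```cpp
// Hedged, Linux-specific sketch: pinning a thread to one core with
// pthread_setaffinity_np so NUMA placement stays predictable. The core
// number and which socket it lives on are assumptions about one machine.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>

static void* worker(void*) {
    // ... latency-sensitive work runs on the pinned core ...
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, worker, nullptr);

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);   // pin to logical CPU 2 (assumed to be on socket 0)
    if (pthread_setaffinity_np(t, sizeof(set), &set) != 0)
        std::fprintf(stderr, "pthread_setaffinity_np failed\n");

    pthread_join(t, nullptr);
    return 0;
}
```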
robreddity12 karma
Moblin -> Meego -> Bada -> Tizen
How were/are any of these going to sell chips?
jecb22 karma
In my opinion, not for any application you're interested in. It's an Android, Windows, and iOS world.
Tizen is being used in cars for IVI.
grkirchhoff12 karma
When can we expect to see chips with a 3d substrate hit the market? What are the biggest challenges in creating such a chip? Do you think it is realistically going to happen, or is it another one of those "this COULD happen" things that never come to fruition?
jecb25 karma
3D substrate as in tri-gates, or two or more layers of devices on the same silicon? I do think these will happen. The obvious challenges are vias and wires, power delivery, and the increased power and device density.
[deleted]12 karma
Graphics Media Accelerator
Is Intel focused on really developing these technologies? Would you expect it to replace lower to middle range video cards in the near future?
jecb16 karma
Now I follow. Yes, we're developing the Media Engine alongside the Gen graphics. Big leaps coming in Broadwell.
MagmaiKH630 karma
How do you feel about AMD? (No really, let it out :))