Point taken! But do think of the astonishing potential for savings in healthcare costs if this remarkably capable AI tech could enable your non-expert nosy neighbors to successfully prompt-engineer an only slightly knobby 3-D printed neurosurgery of intricate nature for your invasive in-laws … an opportunity not to be missed! A most tantalizing prospect! 8^b
I find it unfortunate that the GTech authors chose to refer to LLMs with locutions that are more typical of sales pitches than of scientific and engineering investigative discourse. This gives the reader the impression that the study has been carried out in a biased, naive, or uncritical manner, overshadowing the potential seriousness and adequacy of the methods, and diluting the potential impact of the results. In this reviewer’s opinion, expressions such as “remarkable capabilities”, “intricate nature”, “amazing capacity”, “tantalizing prospect”, “astonishing potential”, and others (e.g. “vital insights”) should be appropriately sedated to better match the target audience of intelligent professionals.
The paper should also clearly state the conclusion that results from its investigation, namely that LLMs cannot currently be used by non-experts to design AI accelerators, but that, in the future, they may possibly be used by experts to assist in various aspects of accelerator design. Expertise is needed to “expertly” split the design into consistent functional subsystems that an LLM may then perform additional work on. Expertise is also needed to prompt-engineer the LLM so that it can perform its targeted support role. Expertise is further needed to develop and provide the HLS examples required for model training. At present, the authors’ proposed pipeline for LLM-assisted EDA effectively inverts the assistance relationship by requiring experts to assist the LLM, in the eventual hope that LLMs would later assist non-experts in doing a job that is likely not suited to them.
I could not force myself to type GPT4AIGChip. I tried.
When this happens then, yes, I’ll agree that we haven’t lost our collective m&m marbles over ML, and that AI actually has more smarties in its bag than we peoples have fig Newtons! “Time will tell” as concluded in this TNP article, particularly because, as future AI figured out time-travel for us, I’ve been able to come back here with great precision and edit this comment to accurately reflect the related consequences (or not? ahem!). Note also that time-travel is a known contributing factor for dyslexia … or orthograph-ic wave dispersion (8p^) … exircese cuati!on
I might be the complete cucurbita party-pooping curmudgeon of the gourd family at this temporal juncture, but my opinion will surely change, in the past, with future advances in AI-mediated time travel! (or not?) ^8p 8d^ 8^p
P.S. The GIT study looks interesting, except maybe the unpronounceable 11-letter Klingon mixed-acronym mouthful: GPT4AIGChip.