Why did IBM 7030 or IBM 360 use byte and word addressing simultaneously?


2

In the 1950s, machines had 36-bit words. Into such a word we could pack symbols using 6 bits each, and to fetch these symbols from the word the programmer had to use bit manipulations.

In 1961, IBM released the 7030, a machine that also had word-addressable memory. On the other hand, this machine was the first one to use the term byte, with 8 bits.

My first assumption about why the term byte was needed is as follows:

If these 8 bits were addressable, we could store and fetch any 8-bit symbol from memory without any bit manipulation. But if memory was word-addressable, my assumption raises a contradiction: how could we address a single byte if memory was word-addressable?

So my question is: what was the point of using the term byte in the IBM 7030, if memory was word-addressable?

6

'Byte' was used before it meant '8 bits', and it was used on machines that had word-addressable memory. The point was to let a program read and write data of less than a word in size.

The PDP-10, with which I am familiar, had "load byte" and "deposit byte" instructions which could read any part of a word, sized from 0 to 36 bits. As far as memory was concerned, whole words would be transferred; the ALU would handle the shifting and masking required to extract/insert the designated field. Since memory was word-addressable, a special address format was required for these instructions - it had a standard address, and also offset-in-word and size-of-byte fields (6 bits each).

The byte instructions were heavily used, for example, for manipulating text. The standard format was 7-bit bytes (ASCII encoding) packed 5 to a word, with one bit left over. The byte instructions made it trivial to process such a format.

Absent such instructions, the programmer would have had to write the shift and mask instructions himself.
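To make the shift-and-mask work concrete, here is a minimal Python sketch of what a PDP-10-style "load byte" / "deposit byte" does. The function names and the right-aligned `pos` convention are mine for illustration; the real machine encodes position and size in a byte pointer word.

```python
# Illustrative sketch (not PDP-10 code): emulate variable-size byte access
# within a 36-bit word. `pos` counts bits from the right end of the word.

WORD_BITS = 36

def load_byte(word: int, pos: int, size: int) -> int:
    """Extract a `size`-bit byte whose low bit sits `pos` bits from the right."""
    return (word >> pos) & ((1 << size) - 1)

def deposit_byte(word: int, pos: int, size: int, value: int) -> int:
    """Return the word with the designated byte replaced by `value`."""
    mask = ((1 << size) - 1) << pos
    return (word & ~mask) | ((value << pos) & mask)

# Five 7-bit ASCII characters packed into one 36-bit word, one bit left over:
word = 0
for i, ch in enumerate("HELLO"):
    word = deposit_byte(word, WORD_BITS - 7 * (i + 1), 7, ord(ch))

text = "".join(chr(load_byte(word, WORD_BITS - 7 * (i + 1), 7)) for i in range(5))
```

On the PDP-10 a single instruction did the shifting and masking that the two helper functions spell out here.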

Stretch (the 7030) was similar. It had a standard 18-bit address for word operations, and an extended format that specified the exact bitfield on which to operate. As with the PDP-10, the term 'byte' is used regardless of the bit count of the field.

'Individual characters, or bytes, in a field may also be varied in length' -- see the section on Integer Arithmetic in the 7030 ref manual. The terminology is slightly different from the PDP-10; it seems words were divided into fields and fields were divided into bytes of 1 to 8 bits.


10

[Please see this answer as well, as it is something of an extension]

Why did IBM 7030 or IBM 360 use byte and word addressing simultaneously

Not sure what the /360 reference is doing here, as it uses byte addressing (*1).

The 7030 in contrast used word and bit addressing.

Word addressing used an 18-bit address when operating on words, and bit addressing used a 24-bit address when operating on 'bytes'. A word address is always stored in the first 18 bits of an instruction word (32/64 bits), followed by 6 bits holding the bit address for instructions accessing chunks of a word (the majority). Most of these also contained the bit length (up to 64). (*2,*3)
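The relationship between the two address forms can be sketched in a few lines of Python. This is an illustration of the 18+6 split described above, not actual 7030 instruction decoding; the function name is my own.

```python
# Illustrative sketch: split a Stretch-style 24-bit bit address into an
# 18-bit word address and a 6-bit bit offset within the 64-bit word.

def split_bit_address(bit_addr: int) -> tuple[int, int]:
    word_addr = bit_addr >> 6     # upper 18 bits select the word
    bit_offset = bit_addr & 0x3F  # lower 6 bits select a bit within it
    return word_addr, bit_offset

# Bit 70 of memory lives in word 1, at bit 6 within that word:
word_addr, bit_offset = split_bit_address(70)
```

A plain word address is thus just a bit address with a zero offset, which is why the two forms coexist naturally in one instruction format.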

In the 1950s machines had 36-bit words.

Some did; others had different word sizes, like 18, 22 or 72 bits.

And in this word we could pack symbols using 6 bits each. To fetch these symbols from the word, the programmer had to use bit manipulations.

Some machines supported 6-bit sizes; others supported none, or arbitrary sizes.

In 1961, IBM released the 7030, a machine that also had word-addressable memory. On the other hand, this machine was the first one to use the term byte, with 8 bits.

No. While Werner Buchholz did coin the term byte when working on the 7030, it didn't describe a unit of 8 bits, but a variable-size definition for a bit group of 1 to 8 bits (*4), defined in each instruction as needed. Others took it the same way, simply as an arbitrary number of bits anywhere between 4 and 9 (*5), depending on machine and/or language, 'bitten' out of a word.

If these 8 bits were addressable, we could store and fetch any 8-bit symbol from memory without any bit manipulation. But if memory was word-addressable, my assumption raises a contradiction: how could we address a single byte if memory was word-addressable?

By extracting that byte. A byte access on the 7030 consists of a word address and a byte address within it. Byte access was done using a data description (dds) made up of mode, field length and byte size.
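A hypothetical sketch of how such a data description drives the access, assuming only the field-length and byte-size components named above (the class and function names, and the left-aligned bit numbering, are my own, not 7030 syntax):

```python
# Illustrative sketch: a dds-style description tells the accessor how long the
# field is and how large its bytes are; the memory itself carries no such info.

from dataclasses import dataclass

@dataclass
class DataDescription:
    byte_size: int     # 1..8 bits per byte
    field_length: int  # total bits in the field

def read_bytes(word: int, offset: int, dds: DataDescription,
               word_bits: int = 64) -> list[int]:
    """Split the field starting `offset` bits from the word's left end into bytes."""
    out = []
    for i in range(dds.field_length // dds.byte_size):
        shift = word_bits - (offset + (i + 1) * dds.byte_size)
        out.append((word >> shift) & ((1 << dds.byte_size) - 1))
    return out

# Three 6-bit bytes in an 18-bit field, left-aligned in a 64-bit word:
dds = DataDescription(byte_size=6, field_length=18)
word = 0b101010_000111_110000 << 46
bytes_out = read_bytes(word, 0, dds)
```

Changing `byte_size` in the description, not the data, is what makes the 7030 byte a variable-size notion.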

So my question is: what was the point of using the term byte in the IBM 7030, if memory was word-addressable and we cannot get any benefit from bytes?

Byte in the 7030 sense isn't a fixed sub-unit of a word like today, but a variable-length unit within a variable-length field within a word.

At this point it may be helpful to peek into the 7030 Assembler manual, as about half of it is dedicated to data definition and how this is handled. It's worlds away from what is canon today and much more like some HLL (*6).


*1 - Another standard IBM set. All addressing was byte-wise, independent of word length or position. It simplified memory access to what we now take for granted.

  • All addresses had the same format and size
  • All were byte addresses
  • No address calculation between data types needed
  • Halfwords and words are fixed-length collections of fixed-size bytes
  • The only speciality of words and halfwords was that they had to be aligned to their size.

All this we take for granted today, and many have a hard time understanding that the world before the /360 was different. More colourful, but more chaotic as well.

*2 - Any length up to 64 bits could be used, and bit addresses wrapped into word addresses. So it was not only possible to pack three 21-bit integers into a word without any need for conversion, but also to spread such bitfields across words - like sixteen 12-bit integers over 3 words :)
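The word-spanning trick amounts to treating word-organized memory as one flat string of bits. A minimal Python sketch of that view, with my own function names and a list of ints standing in for memory:

```python
# Illustrative sketch: Stretch-style bit addressing lets a field straddle
# word boundaries; memory is modelled here as a list of 64-bit words.

WORD_BITS = 64

def store_field(memory: list[int], bit_addr: int, size: int, value: int) -> None:
    """Write a `size`-bit field at an arbitrary bit address, crossing words if needed."""
    for i in range(size):
        bit = (value >> (size - 1 - i)) & 1
        w, b = divmod(bit_addr + i, WORD_BITS)
        mask = 1 << (WORD_BITS - 1 - b)
        memory[w] = (memory[w] | mask) if bit else (memory[w] & ~mask)

def load_field(memory: list[int], bit_addr: int, size: int) -> int:
    """Read a `size`-bit field back, bit by bit, from the flat bit space."""
    value = 0
    for i in range(size):
        w, b = divmod(bit_addr + i, WORD_BITS)
        value = (value << 1) | ((memory[w] >> (WORD_BITS - 1 - b)) & 1)
    return value

# Sixteen 12-bit integers fill exactly three 64-bit words (192 bits):
mem = [0, 0, 0]
for i in range(16):
    store_field(mem, 12 * i, 12, i * 100)
values = [load_field(mem, 12 * i, 12) for i in range(16)]
```

The bit-by-bit loop is deliberately naive; the point is that the caller never computes which word a field lands in.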

*3 - I personally think the object-oriented bit-mongering of Intel's 432 was quite influenced by the way the 7030 handled its data.

*4 - The Assembler used default sizes of 1, 4 or 8, depending on the mode field, if none was given explicitly or by data definitions.

*5 - Or simply used byte as a term for any bit-defined chunk different from a word - as in "a bite from a word" - like the PDP-10's bit-field operations handling variable-length sections of 1..36 bits, which were also called bytes.

*6 - In some way it shows the quintessence of computer development between then and now. In the old days, computers were enhanced to offer instructions the programmer could use to deliver the task, while today instructions are simplified and programmers work hard to please the compiler.