
[Google Study Jam - Beginner] 03. Introduction to Responsible AI

egahyun 2024. 12. 30. 01:53

์ฑ…์ž„๊ฐ ์žˆ๋Š” AI ์‚ฌ์šฉ์˜ ์˜๋ฏธ

01. Objectives

(1) Understand why Google has put AI principles in place

(2) Identify the need for a responsible AI practice within an organization

(3) Recognize that decisions made at all stages of a project have an impact on responsible AI

(4) Recognize that organizations can design AI to fit their own business needs and values

 

02. Interaction between AI and people: AI systems

(1) Examples: traffic and weather forecasting / recommending TV programs to watch next, etc.

(2) Pace of development: AI is advancing so quickly, and becoming so common, that technologies not based on AI can even start to feel lacking

(3) Effect: enables us to see, understand, and interact with the world through computers

(4) Problem: despite enormous progress, AI is still not perfect

  - There is no universal definition of responsible AI, and no simple checklist or formula that defines how responsible AI practices should be implemented

  ⇒ Developing responsible AI requires an understanding of the possible issues, limitations, or unintended consequences

 

03. ์ž์ฒด์ ์ธ AI ์›์น™ ๊ฐœ๋ฐœ

(1) Organization : Organization์˜ ์‚ฌ๋ช…๊ณผ ๊ฐ€์น˜๋ฅผ ๋ฐ˜์˜ํ•˜์—ฌ ์ž์ฒด์ ์ธ AI ์›์น™์„ ๊ฐœ๋ฐœ

(2) ํŠน์ง• : ์›์น™์ด ๊ฐ๊ฐ ๋‹ค๋ฅด์ง€๋งŒ ๊ณตํ†ต์ ์ธ ์ฃผ์ œ๋ฅผ ์ฐพ์„ ์ˆ˜ ์žˆ์Œ

  - ํˆฌ๋ช…์„ฑ + ๊ณต์ •์„ฑ + ์ฑ…์ž„์„ฑ + ๊ฐœ์ธ ์ •๋ณด ๋ณดํ˜ธ
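Purely as an illustration (this checklist and all names in it are my own sketch, not part of the course), the four common themes above could be captured as a lightweight project-review checklist:

```python
# Illustrative sketch only (not from the course): the four common themes
# of organizational AI principles as a simple project-review checklist.
from dataclasses import dataclass, field

THEMES = ("transparency", "fairness", "accountability", "privacy")

@dataclass
class PrincipleReview:
    project: str
    # Maps each theme to whether the project has addressed it yet.
    checks: dict = field(default_factory=lambda: {t: False for t in THEMES})

    def mark(self, theme: str) -> None:
        """Record that the project has addressed one theme."""
        if theme not in self.checks:
            raise ValueError(f"unknown theme: {theme}")
        self.checks[theme] = True

    def unaddressed(self) -> list:
        """Themes the project has not yet addressed."""
        return [t for t, done in self.checks.items() if not done]

# Hypothetical example project: two themes covered, two still open.
review = PrincipleReview("tv-recommendation")
review.mark("transparency")
review.mark("privacy")
print(review.unaddressed())  # ['fairness', 'accountability']
```

The point of the sketch is only that each organization would fill in its own themes and criteria to fit its mission and values.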

 

04. ์‚ฌ๋žŒ์˜ ์˜์‚ฌ๊ฒฐ์ •๊ณผ Responsible AI

: ์ •์˜๋˜๊ณ  ๋ฐ˜๋ณต ๊ฐ€๋Šฅํ•œ ํ”„๋กœ์„ธ์Šค๊ฐ€ ํ•„์š”

(1) ์˜คํ•ด : ๋จธ์‹ ์ด ์ค‘์š”ํ•œ ์˜์‚ฌ๊ฒฐ์ • ์—ญํ• ์„ ํ•œ๋‹ค๊ณ  ์˜คํ•ด

(2) ์‹ค์ œ : ์‚ฌ๋žŒ์˜ ๊ฒฐ์ •์ด Google์˜ ๊ธฐ์ˆ  ์ œํ’ˆ ์ „๋ฐ˜์— ์–ฝํ˜€ ์žˆ์Œ

  ⇒ ์˜์‚ฌ ๊ฒฐ์ •์„ ๋‚ด๋ฆด ๋•Œ๋งˆ๋‹ค ๊ณ ๋ฏผ๊ณผ ํ‰๊ฐ€๊ฐ€ ์žˆ์–ด์•ผ๋งŒ ๊ฐœ๋…๋ถ€ํ„ฐ ๋ฐฐํฌ์™€ ์œ ์ง€๊ด€๋ฆฌ๋ฅผ ๊ฑฐ์น˜๋Š” ๋™์•ˆ ์ฑ…์ž„๊ฐ ์žˆ๊ฒŒ ์„ ํƒ ๊ฐ€๋Šฅ

  - ์‚ฌ๋žŒ์ด ์ด๋Ÿฌํ•œ ๋จธ์‹ ์„ ์„ค๊ณ„ํ•˜๊ณ  ๋นŒ๋“œํ•˜๋ฉฐ ์‚ฌ์šฉ๋˜๋Š” ๋ฐฉ์‹์„ ๊ฒฐ์ •

  - ๋ชจ๋ธ ํ•™์Šต ๋ฐ ๋ฐ์ดํ„ฐ ์ˆ˜์ง‘ + ์ƒ์„ฑ

  - AI ๋ฐฐํฌ์™€ AI๊ฐ€ ์ฃผ์–ด์ง„ ์ปจํ…์ŠคํŠธ์—์„œ ์ ์šฉ๋˜๋Š” ๋ฐฉ์‹์„ ์ œ์–ด

  - ์ž์‹ ์˜ ๊ฐ€์น˜์— ๋”ฐ๋ผ ์„ ํƒํ•˜์—ฌ ๊ฒฐ์ •์„ ๋‚ด๋ฆผ

Google on responsible AI

01. Google์˜ ์‹œ์„ 

: ์ž์ฒด AI ์›์น™๊ณผ ๊ด€ํ–‰, ๊ฑฐ๋ฒ„๋„Œ์Šค ํ”„๋กœ์„ธ์Šค, ๋„๊ตฌ๋ฅผ ๊ฐœ๋ฐœํ•˜๋ฉด์„œ Google์˜ ๊ฐ€์น˜๋ฅผ ๋ฐ˜์˜ํ•˜๊ณ  ์ฑ…์ž„๊ฐ ์žˆ๋Š” AI์— ๋Œ€ํ•œ ์ ‘๊ทผ๋ฐฉ์‹ 

(1) Built for everyone

(2) Accountable and safe

(3) Respects privacy

(4) Driven by scientific excellence

 

02. Google์ด ์ƒ๊ฐํ•˜๋Š” Responsible AI์˜ ํšจ๊ณผ

(1) ๋” ๋‚˜์€ ๋ชจ๋ธ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ์Œ

(2) ๊ณ ๊ฐ๊ณผ ๊ทธ ๊ณ ๊ฐ์˜ ๊ณ ๊ฐ ๊ฐ„์— ์‹ ๋ขฐ ๊ตฌ์ถ• ๊ฐ€๋Šฅ

  ์‹ ๋ขฐ๊ฐ€ ๊นจ์ง€๋ฉด AI ๋ฐฐํฌ๊ฐ€ ์ง€์—ฐ๋˜๊ฑฐ๋‚˜ ์‹คํŒจํ•  ์œ„ํ—˜์ด ์žˆ์œผ๋ฉฐ ์ตœ์•…์˜ ๊ฒฝ์šฐ ํ•ด๋‹น ์ œํ’ˆ์˜ ์ดํ•ด๊ด€๊ณ„์ž์—๊ฒŒ ํ•ด๋ฅผ ๋ผ์น  ์ˆ˜ ์žˆ์Œ

 

03. Google์˜ AI ์›์น™

: ์ผ๋ จ์˜ ํ‰๊ฐ€์™€ ๊ฒ€ํ† ๋ฅผ ํ†ตํ•ด AI ๊ด€๋ จ ์ œํ’ˆ๊ณผ ๋น„์ฆˆ๋‹ˆ์Šค์— ๋Œ€ํ•ด ๊ฒฐ์ •ํ•˜๊ธฐ ์œ„ํ•œ ํ™•์ธ ๋ฐฉ๋ฒ•

 ์—ฌ๋Ÿฌ ์ œํ’ˆ ์˜์—ญ๊ณผ ์ง€์—ญ์— ์—„๊ฒฉํ•˜๊ณ  ์ผ๊ด€์„ฑ ์žˆ๊ฒŒ ์ ‘๊ทผ

(1) ์žฅ์  :๊ทธ๋ฃน์ด ๊ณต๋™์˜ ์•ฝ์†์„ ์ง€ํ‚ค๋Š” ๋ฐ ๋„์›€ ์คŒ

(2) ๋ฌธ์ œ์  : ๋ชจ๋“  ์‚ฌ๋žŒ์ด ์ฑ…์ž„๊ฐ ์žˆ๊ฒŒ ์ œํ’ˆ์„ ์„ค๊ณ„ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•œ ๋ชจ๋“  ๊ฒฐ์ •์— ๋™์˜ํ•˜๋Š” ๊ฒƒ์€ ์•„๋‹˜

(3) ํ•ด๊ฒฐ๋ฐฉ๋ฒ• : ์‚ฌ๋žŒ๋“ค์ด ์‹ ๋ขฐํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ•๋ ฅํ•œ ํ”„๋กœ์„ธ์Šค์˜ ๊ฐœ๋ฐœ

  ์ตœ์ข… ๊ฒฐ์ •์— ๋™์˜ํ•˜์ง€ ์•Š๋”๋ผ๋„ ๊ทธ ๊ฒฐ์ •์— ์ด๋ฅด๊ฒŒ ๋œ ํ”„๋กœ์„ธ์Šค๋ฅผ ์‹ ๋ขฐํ•  ์ˆ˜ ์žˆ๋„๋ก
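A defined, repeatable process of this kind could be sketched as a staged pipeline that records how each decision was reached (the stage names below are hypothetical, not Google's actual governance stages):

```python
# Hypothetical sketch of a defined, repeatable review process. The stage
# names are illustrative; they are not Google's actual governance stages.
STAGES = ("intake", "risk_assessment", "expert_review", "decision")

def run_review(proposal: str, assessments: dict) -> tuple:
    """Run a proposal through every stage in order and keep an audit trail.

    `assessments` maps each stage name to a pass/fail callable. The trail
    of (stage, passed) pairs is what lets people trust the process even
    when they disagree with the final decision.
    """
    trail = []
    for stage in STAGES:
        passed = assessments[stage](proposal)
        trail.append((stage, passed))
        if not passed:
            return "rejected", trail  # stop early, but keep the record
    return "approved", trail

# Example: a made-up proposal that fails at the risk-assessment stage.
always = lambda ok: (lambda proposal: ok)
outcome, trail = run_review(
    "face-filter-app",
    {"intake": always(True), "risk_assessment": always(False),
     "expert_review": always(True), "decision": always(True)},
)
print(outcome)  # rejected
```

The design choice worth noting is the audit trail: even a rejected proposal leaves a record of which stage it failed and why the process reached that outcome.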

 

04. Google์˜ 7๋Œ€ AI ์›์น™ ( 2018๋…„ 6์›”)

  Google์˜ ์—ฐ๊ตฌ์™€ ์ œํ’ˆ ๊ฐœ๋ฐœ์„ ์ ๊ทน์ ์œผ๋กœ ๊ด€๋ฆฌํ•˜๊ณ  ๋น„์ฆˆ๋‹ˆ์Šค ๊ฒฐ์ •์— ์˜ํ–ฅ์„ ์คŒ

  1. AI should be socially benefical
    ⇒ ๊ด‘๋ฒ”์œ„ํ•œ ์‚ฌํšŒ์ , ๊ฒฝ์ œ์  ์š”์ธ์„ ๊ณ ๋ คํ•˜์—ฌ "์ „๋ฐ˜์ ์œผ๋กœ ์˜ˆ์ƒ๋˜๋Š” ์ด์ต > ์˜ˆ์ธก ๊ฐ€๋Šฅํ•œ ์œ„ํ—˜๊ณผ ๋‹จ์ "์ธ ๊ฒฝ์šฐ์ง„ํ–‰
  2. AI should avoid creating or reinforcing unfair bias
    ์ธ์ข…, ๋ฏผ์กฑ, ์„ฑ๋ณ„, ๊ตญ์ , ์†Œ๋“, ์„ฑ์  ์ง€ํ–ฅ ๋“ฑ ๋ฏผ๊ฐํ•œ ํŠน์„ฑ๊ณผ ๊ด€๋ จํ•˜์—ฌ ์‚ฌ๋žŒ๋“ค์—๊ฒŒ ๋ถ€๋‹นํ•œ ์˜ํ–ฅ์„ ๋ฏธ์น˜์ง€ ์•Š๋„๋ก
  3. AI should be built and tested for safety
    ์˜๋„์น˜ ์•Š๊ฒŒ ์œ„ํ—˜ํ•œ ๊ฒฐ๊ณผ๋ฅผ ์ดˆ๋ž˜ํ•˜์ง€ ์•Š๋„๋ก ๊ณ„์†ํ•ด์„œ ๊ฐ•๋ ฅํ•œ ์•ˆ์ „ ๊ด€ํ–‰๊ณผ ๋ณด์•ˆ ๊ด€ํ–‰์„ ๊ฐœ๋ฐœํ•˜๊ณ  ์ ์šฉ
  4. AI should be accountable to people
     ์ ์ ˆํ•œ ํ”ผ๋“œ๋ฐฑ๊ณผ ๊ด€๋ จ ์„ค๋ช…, ์ด์˜์ œ๊ธฐ ๊ธฐํšŒ๋ฅผ ์ œ๊ณตํ•˜๋Š” AI ์‹œ์Šคํ…œ์„ ์„ค๊ณ„
  5. AI should incorporate privacy design principles
    ๊ณ ์ง€์™€ ๋™์˜ ๊ธฐํšŒ๋ฅผ ์ œ๊ณตํ•˜๊ณ  ๊ฐœ์ธ ์ •๋ณด ๋ณดํ˜ธ ์žฅ์น˜๊ฐ€ ์žˆ๋Š” ๊ตฌ์กฐ๋ฅผ ์žฅ๋ คํ•˜๋ฉฐ ๋ฐ์ดํ„ฐ ์‚ฌ์šฉ์— ์žˆ์–ด์„œ ์ ์ ˆํ•œ ํˆฌ๋ช…์„ฑ๊ณผ ์ œ์–ด๊ถŒ์„ ์ œ๊ณต
  6. AI should uphold high standards of scientific excellence
     ๋‹ค์–‘ํ•œ ์ดํ•ด๊ด€๊ณ„์ž์™€ ํ˜‘๋ ฅํ•˜์—ฌ AI ๋ถ„์•ผ๋ฅผ ์‹ ์ค‘ํžˆ ์ด๋Œ์–ด๋‚˜๊ฐ + ๊ณผํ•™์ ์œผ๋กœ ์—„๊ฒฉํ•˜๋ฉด์„œ๋„ ์—ฌ๋Ÿฌ ํ•™๋ฌธ์„ ๋„˜๋‚˜๋“œ๋Š” ์ ‘๊ทผ๋ฐฉ์‹์„ ํ™œ์šฉ
     ๋” ๋งŽ์€ ์‚ฌ๋žŒ์ด ์œ ์šฉํ•œ AI ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ๊ฐœ๋ฐœํ•  ์ˆ˜ ์žˆ๋„๋ก ๊ต์œก ์ž๋ฃŒ, ๋ชจ๋ฒ” ์‚ฌ๋ก€, ์—ฐ๊ตฌ ๊ฒฐ๊ณผ๋ฅผ ๊ฒŒ์‹œํ•˜์—ฌ ๊ด€๋ จ ์ง€์‹์„ ๊ณต์œ 
  7. AI should be made available for uses that accord with these principles
     ๋งŽ์€ ๊ธฐ์ˆ ์ด ์—ฌ๋Ÿฌ ์šฉ๋„๋กœ ์‚ฌ์šฉ๋˜๋ฏ€๋กœ Google์€ ์ž ์žฌ์ ์œผ๋กœ ์œ ํ•ดํ•˜๊ฑฐ๋‚˜ ์•…์˜์ ์ธ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ์ œํ•œํ•˜๊ธฐ ์œ„ํ•ด ๋…ธ๋ ฅ

05. Google์ด ์žํ–ฅํ•˜์ง€ ์•Š์„ 4๊ฐ€์ง€ AI ์‘์šฉ ๋ถ„์•ผ

  1. technologies that cause or are likely to cause overall harm
  2. weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people
  3. technologies that gather or use information for surveillance that violates internationally accepted norms
  4. technologies whose purpose contravenes widely accepted principles of international law and human rights

 

06. Conclusion

: because AI principles alone still do not give direct answers about how products should be built, discussions about AI principles must not be avoided