sql - Multi-row, instead of single-row, data transformation with trigger in MySQL


I have this query:

create trigger move_form_data
after insert on schema.original_table
for each row
  insert into schema.new_table
    (name, street_address, street_address_line_2, city, state, zip, country, dob)
  select name, street_address, street_address_line_2, city, state, zip, country, dob
    from view_data_submits;

which calls this view:

create view view_data_submits as
select max(case when element_label = 0 then element_value end) as name,
       max(case when element_label = 1 then element_value end) as street_address,
       max(case when element_label = 2 then element_value end) as street_address_line_2,
       max(case when element_label = 3 then element_value end) as city,
       max(case when element_label = 4 then element_value end) as state,
       max(case when element_label = 5 then element_value end) as zip,
       max(case when element_label = 6 then element_value end) as country,
       max(case when element_label = 7 then element_value end) as dob
  from schema.original_table
 where group_id = (select max(group_id) from schema.original_table)
 group by group_id;

I want just one row back, and the query works as intended when I run the insert part of the code without the trigger:

insert into schema.new_table
  (name, street_address, street_address_line_2, city, state, zip, country, dob)
select name, street_address, street_address_line_2, city, state, zip, country, dob
  from view_data_submits;

Currently, though, it gives me an inserted row for every row the user's form submission writes, transforming the original table into a new table like this:

# id, name, street_address, street_address_line_2, city, state, zip, country, dob
2, fsa asdadfq, , , , , , ,
3, fsa asdadfq, boogyboogyboogy, , , , , ,
4, fsa asdadfq, boogyboogyboogy, youdoowork, , , , ,
5, fsa asdadfq, boogyboogyboogy, youdoowork, a, , , ,
6, fsa asdadfq, boogyboogyboogy, youdoowork, a, dd, , ,
7, fsa asdadfq, boogyboogyboogy, youdoowork, a, dd, 09876, ,
8, fsa asdadfq, boogyboogyboogy, youdoowork, a, dd, 09876, belize,
9, fsa asdadfq, boogyboogyboogy, youdoowork, a, dd, 09876, belize, 2014-02-05  <-- the only row I want (= the total form submission)

Instead of just:

# id, name, street_address, street_address_line_2, city, state, zip, country, dob
9, fsa asdadfq, boogyboogyboogy, youdoowork, a, dd, 09876, belize, 2014-02-05

I have a feeling it's either the FOR EACH ROW syntax, or the application saves data in a compounding fashion somehow. I'm leaning towards the first one.

Does anyone have suggestions to remedy this? I feel as though it's a noob mistake I just forgot about... haha.

Edit, per request:

Here is a select * from the original table for the max group_id being pulled:

# id, form_id, element_label, element_value, group_id
207,       2,             0,          name,       25
208,       2,             1,     address 1,       25
209,       2,             2,     address 2,       25
210,       2,             3,          city,       25
211,       2,             4,         state,       25
212,       2,             5,           zip,       25
213,       2,             6,       country,       25
214,       2,             7,           dob,       25

Since the values are in blob form, I replaced them with what they represent; this was pulled from the newest inserted data.

This looks like an EAV schema (oh! the joys!).

It looks like the root problem is that the application isn't inserting a "row" the way you want to see it; it's inserting multiple rows into the same table, each row representing a single attribute value.

The application is using the entity-attribute-value (EAV) model, and you want a row that looks like the traditional relational model.

What that rather ugly "max(), max(), max() ... GROUP BY" query is doing is converting the EAV rows into the columns of a single row.
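To see what that pivot does, here is a minimal sketch using Python's sqlite3 (an in-memory stand-in for MySQL; the table layout and sample values are taken from the question, with only a few of the eight labels shown):

```python
import sqlite3

# In-memory table with the EAV layout shown in the question.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE original_table
               (id INTEGER, form_id INTEGER, element_label INTEGER,
                element_value TEXT, group_id INTEGER)""")
con.executemany("INSERT INTO original_table VALUES (?,?,?,?,?)",
                [(207, 2, 0, "name", 25), (208, 2, 1, "address 1", 25),
                 (214, 2, 7, "dob", 25)])

# The max(CASE ...) pivot: each element_label becomes one column,
# collapsing a group's EAV rows into a single relational row.
pivot = con.execute("""
    SELECT group_id,
           max(CASE WHEN element_label = 0 THEN element_value END) AS name,
           max(CASE WHEN element_label = 1 THEN element_value END) AS street_address,
           max(CASE WHEN element_label = 7 THEN element_value END) AS dob
      FROM original_table
     WHERE group_id = (SELECT max(group_id) FROM original_table)
     GROUP BY group_id""").fetchone()
print(pivot)  # (25, 'name', 'address 1', 'dob')
```

The same SELECT runs unchanged in MySQL; only the connection plumbing is SQLite-specific.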


It looks like you want to do this conversion "on the fly", and maintain the contents of target_table whenever rows are inserted into original_table.

If I were solving this problem, I would include group_id in target_table, since that's the value relating the individual EAV rows (as demonstrated in your view query).

And I would not use a SELECT max(group_id) query to reference the value on the row just inserted into original_table. In the context of an AFTER INSERT trigger, I already have the group_id value of the row being inserted; it's available to me as "new.group_id".

(The real reason to avoid using a max(group_id) query to get the value is that you don't have any guarantee that some other process isn't going to insert a larger value for group_id while your process is running. You're not guaranteed that max(group_id) will return the value of the group_id you just inserted. Granted, you won't ever see this problem happen in single-user testing; you'd have to include deliberate delays in processing, and have two processes running at the same time, in order for it to happen. It's one of those problems that pops up in production rather than in testing, because we don't bother to set up a test case to discover it.)

If you want a single row in target_table for each group_id value, create a unique constraint on the group_id column in target_table. Then use an "upsert"-type operation to update the row if it exists, or insert the row if it doesn't.

You can do this with the MySQL INSERT ... ON DUPLICATE KEY UPDATE statement. It requires a unique constraint, which we have covered. One downside of this statement is that if target_table has an AUTO_INCREMENT column, it will "burn" through auto_increment values whenever the row already exists.

Based on what you have in your trigger/view, it would look something like this:

insert into target_table (group_id, name, street_address, ... )
select o.group_id
     , max(case when o.element_label = 0 then o.element_value end) as name
     , max(case when o.element_label = 1 then o.element_value end) as street_address
     , max(case when o.element_label = 2 then o.element_value end) as street_address_line_2
     , max(case when o.element_label = 3 then o.element_value end) as city
     , max(case when o.element_label = 4 then o.element_value end) as state
     , max(case when o.element_label = 5 then o.element_value end) as zip
     , max(case when o.element_label = 6 then o.element_value end) as country
     , max(case when o.element_label = 7 then o.element_value end) as dob
  from schema.original_table o
 where o.group_id = new.group_id
 group by o.group_id
    on duplicate key update
       name                  = values(name)
     , street_address        = values(street_address)
     , street_address_line_2 = values(street_address_line_2)
     , city                  = values(city)
     , state                 = values(state)
     , zip                   = values(zip)
     , country               = values(country)
     , dob                   = values(dob)

Note that I'm counting on the unique constraint on target_table(group_id) to throw a "duplicate key" exception when the statement attempts to insert a row with a group_id value that already exists in target_table. When that happens, the statement turns into an UPDATE statement, with an implied WHERE group_id = values(group_id) (or whatever columns are involved in the unique key constraint violation).

This is the simplest approach, as long as burning through auto_increment values isn't a concern.
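The upsert behavior that the unique constraint enables can be sketched in Python's sqlite3; note that SQLite spells it ON CONFLICT ... DO UPDATE rather than MySQL's ON DUPLICATE KEY UPDATE (column names here are a reduced subset taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# The unique constraint on group_id is what makes the upsert possible.
con.execute("""CREATE TABLE target_table
               (id INTEGER PRIMARY KEY AUTOINCREMENT,
                group_id INTEGER UNIQUE, name TEXT, city TEXT)""")

def upsert(group_id, name, city):
    # SQLite's ON CONFLICT ... DO UPDATE plays the role that MySQL's
    # INSERT ... ON DUPLICATE KEY UPDATE plays in the answer.
    con.execute("""INSERT INTO target_table (group_id, name, city)
                   VALUES (?, ?, ?)
                   ON CONFLICT(group_id) DO UPDATE
                   SET name = excluded.name, city = excluded.city""",
                (group_id, name, city))

upsert(25, "fsa asdadfq", None)
upsert(25, "fsa asdadfq", "youdoowork")  # same group_id: updates in place
count, city = con.execute(
    "SELECT count(*), max(city) FROM target_table").fetchone()
print(count, city)  # 1 youdoowork
```

Two calls with the same group_id still leave exactly one row; the second call updates it instead of inserting a duplicate.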

You're not limited to the INSERT ... ON DUPLICATE KEY statement; you can "roll your own" upsert instead. But... you want to be cognizant of possible race conditions... if you perform a SELECT and then a subsequent INSERT, you leave a small window in which another process can sneak in...

I would instead use a NOT EXISTS predicate to test for the existence of the row:

insert into target_table ( ...
select ...
  from original_table o
 where o.group_id = new.group_id
   and not exists (select 1 from target_table d where d.group_id = new.group_id)

Then I'd test whether a row was inserted (by checking the number of affected rows), and if no row was inserted, attempt an UPDATE. (I'm banking on the SELECT statement returning a single row.)
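That insert-then-check-then-update flow can be sketched with Python's sqlite3 (a stand-in for MySQL; the table is reduced to two columns for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target_table (group_id INTEGER UNIQUE, name TEXT)")

def upsert(group_id, name):
    # Attempt the insert only when no row for this group_id exists yet,
    # using the NOT EXISTS predicate from the answer.
    cur = con.execute("""INSERT INTO target_table (group_id, name)
                         SELECT ?, ?
                          WHERE NOT EXISTS (SELECT 1 FROM target_table d
                                             WHERE d.group_id = ?)""",
                      (group_id, name, group_id))
    if cur.rowcount == 0:  # nothing inserted: the row exists, so update it
        con.execute("UPDATE target_table SET name = ? WHERE group_id = ?",
                    (name, group_id))

upsert(25, "first")
upsert(25, "second")
row = con.execute("SELECT group_id, name FROM target_table").fetchone()
print(row)  # (25, 'second')
```

The first call inserts; the second finds rowcount 0 and falls through to the UPDATE, so the table still holds one row per group_id.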

For better performance, you might use an anti-join pattern for the same check (for the existence of an existing row), but for one row, NOT EXISTS (subquery) is fine, and I think it's easier to understand.

insert into target_table ( ...
select ...
  from original_table o
  left
  join target_table t
    on t.group_id = new.group_id
 where o.group_id = new.group_id
   and t.group_id is null

(That SELECT from original_table might need to be wrapped in an inline view, since it's referencing the same table that's being inserted into. Turning the query into a derived table should fix that, if it is a problem.)


I said you "could" use the query from the view in the trigger. That's not the approach I'd choose, though. It's not really necessary. You don't need to run that max(), max(), max() query for every column.

You have the values of the row being inserted into original_table, so you know which element_label is being inserted, and there's only one column that has changed in target_table. (Do you really want max(element_value), or do you want the value just inserted?)

Here's the approach I would use in the trigger. I'd avoid running a query against original_table at all, and upsert just the one column in target_table:

if new.element_label = 0 then
   -- name
   insert into target_table (group_id, `name`)
   values (new.group_id, new.element_value)
   on duplicate key update `name` = values(`name`);
elseif new.element_label = 1 then
   -- street_address
   insert into target_table (group_id, `street_address`)
   values (new.group_id, new.element_value)
   on duplicate key update `street_address` = values(`street_address`);
elseif new.element_label = 2 then
   -- street_address2
   insert into target_table (group_id, `street_address2`)
   values (new.group_id, new.element_value)
   on duplicate key update `street_address2` = values(`street_address2`);
elseif new.element_label = 3 then
   -- city
   insert into target_table (group_id, `city`)
   values (new.group_id, new.element_value)
   on duplicate key update `city` = values(`city`);
elseif new.element_label = 4 then
   ...
end if;

I know that's not pretty, but I think it's the best approach if maintenance of target_table has to be done at the time rows are inserted into the original table. (The problem isn't the database here; the problem is the EAV model, or really, the "impedance mismatch" between the EAV model (one row for each attribute value) and the relational model (one column in each row for each attribute value).)

This isn't any uglier than the max(), max(), max() query.

I would also ditch the auto_increment id in the target table, and use group_id (the value from original_table) as the primary key in target_table, since you want just one row for each group_id.
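Putting those last pieces together, here is a sketch of the per-attribute trigger logic using Python's sqlite3, with group_id as the primary key; the label-to-column map mirrors the IF/ELSEIF ladder (only labels 0 and 3 from the question are shown, and the column set is reduced for brevity):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# group_id itself is the primary key: exactly one row per submission.
con.execute("""CREATE TABLE target_table
               (group_id INTEGER PRIMARY KEY, name TEXT, city TEXT)""")

# The trigger's IF/ELSEIF ladder, expressed as a label-to-column map;
# only one column of target_table is touched per incoming EAV row.
COLUMNS = {0: "name", 3: "city"}

def on_eav_insert(group_id, element_label, element_value):
    col = COLUMNS.get(element_label)
    if col is None:
        return
    # Single-column upsert (SQLite's analogue of ON DUPLICATE KEY UPDATE).
    con.execute(f"""INSERT INTO target_table (group_id, "{col}")
                    VALUES (?, ?)
                    ON CONFLICT(group_id) DO UPDATE
                    SET "{col}" = excluded."{col}" """,
                (group_id, element_value))

# Simulate the application writing one EAV row per attribute.
on_eav_insert(25, 0, "fsa asdadfq")
on_eav_insert(25, 3, "youdoowork")
row = con.execute("SELECT * FROM target_table").fetchone()
print(row)  # (25, 'fsa asdadfq', 'youdoowork')
```

Each EAV insert fills in one more column of the same row, which is exactly how the trigger incrementally builds the "relational" row as the form rows arrive.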


Update:

You have to change the DELIMITER to something other than a semicolon when the trigger body contains semicolons. Documentation here: http://dev.mysql.com/doc/refman/5.5/en/trigger-syntax.html

e.g.

delimiter $$

create trigger trg_original_table_ai
after insert on original_table
for each row
begin
   if new.element_label = 0 then
      -- name
      insert into target_table (group_id, `name`)
      values (new.group_id, new.element_value)
      on duplicate key update `name` = values(`name`);
   elseif new.element_label = 1 then
      -- street_address
      insert into target_table (group_id, `street_address`)
      values (new.group_id, new.element_value)
      on duplicate key update `street_address` = values(`street_address`);
   end if;
end$$

delimiter ;
