Looking at the much more complete RocketNews24 article (News9, how do you manage to be worse than RocketNews24?), the silver lining is that this rather appears to be a marketing ploy for the show in question, albeit a distastefully ill-conceived one. No matter how the AI feels (or 'feels'), if she actually learned to program her blog to produce the effects detailed in the article and seen in the screenshots, that would be the bigger story here.
Nevertheless, it raises some damned good questions about the ethics of developing AI who may not want to be our little museum curiosities. This may be a marketing ploy today, but what does even that say about the creators of the AI? And what happens when (and don't kid yourself, it very much is a case of 'when') an AI does develop depression, does have thoughts of suicide? I don't want to live in a world that creates beings that want to die.